Automatic Traffic Distribution with AWS Elastic Load Balancer

Load balancing is a common practice for multiple-server setups. When network traffic to your site becomes heavy, you have the option to set up more servers and have a load balancer distribute the traffic among them. For this purpose, Amazon Web Services provides its Elastic Load Balancers.

AWS Elastic Load Balancers automatically distribute incoming traffic across your instances. Basically, the load balancer is your single point of contact that routes your requests and traffic to your other instances. When one instance fails, the ELB simply reroutes your traffic to the other running instances. With ELBs, your system also becomes scalable, because you can add or remove EC2 instances based on need.

Up next, we will get our own ELB running in only a few steps: first setting up the instances that we will be using, then configuring our ELB, and finally testing that our load balancing is indeed working. Simple!

Instances Setup:

> First Instance:

  1. Sign up for an Amazon account if you don’t have one yet. As of this writing, Amazon offers Free Tier services free for one year.
  2. Now open or create the first instance that we will be attaching to our load balancer.
  3. Install Apache or any web server. In your security groups, make sure that your instance is accessible via port 80. You can check whether it’s accessible by visiting the public IP in your browser.
  4. Once you have verified that it works, you may change the default home page. If you’re using Apache, it’s at /var/www/html/index.html. Put something that indicates it belongs to instance 1. I changed mine to <h1>I’M IN INSTANCE 1 \o/</h1>.
  5. Refresh your browser to confirm that it works.
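As a sketch, the bootstrap for such an instance might look like the following. The install commands assume Amazon Linux with Apache, and the instance label is just for illustration; the document root defaults to the current directory here so the sketch runs anywhere, but on a real instance it would be /var/www/html.

```shell
#!/bin/bash
# Sketch: bootstrap a web server page that identifies its instance.
# On a real Amazon Linux instance you would first run:
#   sudo yum install -y httpd && sudo service httpd start
INSTANCE_LABEL="INSTANCE 1"
DOC_ROOT="${DOC_ROOT:-.}"   # on the instance this would be /var/www/html

# Write a home page that tells us which instance served the request
echo "<h1>I'M IN $INSTANCE_LABEL \o/</h1>" > "$DOC_ROOT/index.html"
```

For the second instance, only INSTANCE_LABEL would change.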

> Second Instance:

  1. You may choose to create an image from our first instance and launch a second one from that image, or you may launch another instance and repeat the steps above (install Apache, etc.).
    • Tip: Choose a different availability zone in the instance settings for each of these instances for now. Later in this post, we’ll see why.
  2. You may want to change /var/www/html/index.html on your second instance to indicate that it belongs to instance 2. This will be useful later when we test whether our load balancing works.
  3. Make sure that your second instance is accessible via port 80 as well. Verify it with your browser.

Here is a summary of the instances that I created in the different availability zones: Instance 1 in us-west-2a and Instance 2 in us-west-2b.

Now that we have our instances running, we can set up our AWS Elastic Load Balancer:

> AWS Elastic Load Balancer:

  1. On your left panel, click on Load Balancers under Load Balancing.


2. Click on Create Load Balancer and you will be redirected to the first step, where we can define the name of the load balancer and set other configurations, including checkboxes for the following:

  • Create an internal load balancer
    • Internal Load Balancers vs Internet Facing Load Balancers
      • Internet-facing load balancers accept traffic from all over the internet and route it to your instances. Internal load balancers, on the other hand, route traffic to instances in private subnets. A scenario that uses both: an internet-facing load balancer distributes traffic from the public to your web servers, and the web servers then send their database requests through an internal load balancer that distributes traffic among your database servers.
      • For this example, let us leave this checkbox unchecked.
  • Enable advanced VPC configuration
    • When this is checked, we can choose which subnets (and availability zones) we want to include in the load balancer. If this is left unchecked, AWS includes all of them by default.
    • Let’s go and check this checkbox!

By default, AWS gives you one subnet in every availability zone. So even without creating one, you’ll be provided with the pre-created ones as options.

So under Select Subnets, let us select the subnets in the availability zones where we created our instances earlier:

For my elastic load balancer, I chose the subnets in us-west-2a and us-west-2b, as those are the availability zones where my instances are.
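For reference, the console steps so far can also be sketched with the AWS CLI (classic ELB commands); the load balancer name and subnet IDs below are placeholders for your own:

```shell
# Create an internet-facing classic ELB listening on port 80,
# attached to the two subnets that hold our instances
aws elb create-load-balancer \
  --load-balancer-name my-elb \
  --listeners "Protocol=HTTP,LoadBalancerPort=80,InstanceProtocol=HTTP,InstancePort=80" \
  --subnets subnet-aaaa1111 subnet-bbbb2222
```

Omitting the `--scheme internal` flag makes the load balancer internet-facing, matching our unchecked checkbox above.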

Note: Multiple Availability Zones
Remember a while ago when we created instances in different availability zones? And when we defined our load balancer’s subnets, we also chose different availability zones? The reason is that with multiple availability zones, when one of the zones suddenly dies or malfunctions, our traffic can still be redirected to the other availability zones, which hopefully are still alive. As the saying goes, don’t put all your eggs in one basket!

Also, whenever we select fewer than two availability zones, AWS never fails to warn us about it:


3) After that segue, let’s continue configuring our load balancer: choose a security group and configure the remaining settings. On each of these pages, we can keep the defaults until we reach Health Checks.

4) Health Checks! Health checks are what our load balancer regularly performs to check whether our instances are still running and able to accept requests. The load balancer regularly sends pings and requests to our instances to check that they are healthy (i.e., up and running). If one of your instances fails the health check, its traffic is rerouted to the other running instances.

We can leave the ping protocol and port as HTTP and 80, but we’ll change the ping path from /index.html to / so that the check hits whatever the home page of our site is.
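If you prefer the command line, the same health check can be set with the AWS CLI; the threshold values here are just one reasonable configuration, not the only valid one:

```shell
# Configure the health check: ping HTTP on port 80 at path /
aws elb configure-health-check \
  --load-balancer-name my-elb \
  --health-check Target=HTTP:80/,Interval=30,Timeout=5,UnhealthyThreshold=2,HealthyThreshold=10
```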



5) Next, we add our EC2 instances.


We just choose our two previous instances. Among the default configurations, you might notice that Enable Cross-Zone Load Balancing is checked. Cross-zone load balancing, released in 2013, is a relatively new feature of AWS ELBs. It aims to solve the discrepancy between traffic from different availability zones: it prevents the problem where clients cache and keep hitting a specific IP, which leads to uneven distribution of traffic. With cross-zone load balancing, the ELB balances across instances instead of zones. Before: if you had two availability zones, A and B, with 1 and 2 instances respectively, your single instance in zone A would receive 50% of the traffic, while the instances in zone B would receive only 25% each. Now, with cross-zone load balancing, each of your instances, regardless of which availability zone it is in, receives 33.33% of the traffic.

So just keep it checked!
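The arithmetic above can be checked with a tiny script; the zone layout (1 instance in zone A, 2 in zone B) is the hypothetical one from the paragraph:

```shell
#!/bin/bash
# Per-instance traffic share, with and without cross-zone load balancing,
# for a hypothetical layout: zone A has 1 instance, zone B has 2.
zones=(1 2)
total=0
for n in "${zones[@]}"; do total=$((total + n)); done

echo "Without cross-zone (traffic is first split evenly per zone):"
for n in "${zones[@]}"; do
  awk -v z="${#zones[@]}" -v n="$n" \
    'BEGIN { printf "  each instance in a zone of %d: %.2f%%\n", n, 100/z/n }'
done
# prints 50.00% for the lone instance in zone A, 25.00% for each in zone B

echo "With cross-zone (traffic is split evenly per instance):"
awk -v t="$total" 'BEGIN { printf "  each instance: %.2f%%\n", 100/t }'
# prints 33.33% for every instance
```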

6) Next, we can add tags if we want. Then let’s proceed to review and create our load balancer! Yay!

7) While the load balancer is being created, you may find additional information in the bottom panel. In the Description tab, you may see the following:


Some important things to note here:

  • The DNS name is the URL where we can access the ELB to get to the site. It is the single access point that routes our requests to either Instance 1 or Instance 2.
  • The Status, on the other hand, shows the health of the instances behind our Elastic Load Balancer. Right after creation, no instances are in service yet because registration is still in progress. Wait a while until they all become InService.
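The same health state can also be polled from the AWS CLI (the load balancer name is a placeholder):

```shell
# Lists each registered instance with its state (InService / OutOfService)
# and, when unhealthy, the reason it is failing
aws elb describe-instance-health --load-balancer-name my-elb
```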


  • Note: You may also add EC2 instances to an already existing and running Elastic Load Balancer by right-clicking on your load balancer and clicking Edit Instances.
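The same registration can be done from the CLI; the instance IDs below are placeholders for your own:

```shell
# Register two existing EC2 instances with the load balancer
aws elb register-instances-with-load-balancer \
  --load-balancer-name my-elb \
  --instances i-0123456789abcdef0 i-0fedcba9876543210
```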


Testing our Load Balancer:

So how do we know that our ELB works? How do we make sure that requests get routed to both instances? Remember that earlier we made the index.html pages of our instances different from one another, each indicating which instance it comes from? The following are some ways we can use that difference to see that our traffic indeed gets routed to both instances:

  1. Go to the DNS name in your browser and try refreshing the page multiple times. You should see messages from both Instance 1 and Instance 2.
  2. We can also test further by running a bash script that curls our page. Save it as a file named check (the ELB URL below is a placeholder for your own DNS name):

    #!/bin/bash
    # Repeatedly request the site through the ELB and log each response
    start=1
    end=20
    for (( i=start; i<end; i++ )); do
      curl -s http://your-elb-dns-name.us-west-2.elb.amazonaws.com >> output.txt
    done

    We can run it by:

    ./check >/dev/null 2>&1

    Then open the output file, output.txt; we should see responses from both Instance 1 and Instance 2.

  3. We can also try making Instance 1 unreachable from the ELB by closing port 80 via its security group. When we run our bash script again, traffic should only be routed to Instance 2, so output.txt should contain only Instance 2’s responses.
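To make eyeballing output.txt less manual, you can tally the responses per instance. Here we fabricate a small sample of what output.txt might contain so the sketch is self-contained:

```shell
#!/bin/bash
# Fabricated sample of what output.txt might hold after the curl loop
printf '%s\n' \
  "<h1>I'M IN INSTANCE 1 \o/</h1>" \
  "<h1>I'M IN INSTANCE 2 \o/</h1>" \
  "<h1>I'M IN INSTANCE 1 \o/</h1>" \
  "<h1>I'M IN INSTANCE 2 \o/</h1>" > output.txt

# Tally how many responses came from each instance
for n in 1 2; do
  echo "Instance $n: $(grep -c "INSTANCE $n" output.txt) responses"
done
# prints "Instance 1: 2 responses" and "Instance 2: 2 responses"
```

With a roughly even split, the load balancer is doing its job; after closing Instance 1’s port 80, its count should drop to zero.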


Common Mistakes:

If your ELB doesn’t work the first time around, don’t fret! You may have just overlooked something. 🙂 Here are some common mistakes (that I, myself, also made while writing this blog post LOL):

1) Choosing the Wrong Subnets
As we did above, when we selected the subnets for our load balancer, we chose those that reside in the availability zones where our instances are. Not choosing the subnet where one of our instances resides renders that instance out of service.

Example: Say we added only subnets in us-west-2a and us-west-2c.


Instance 2, which is in us-west-2b, would be rendered out of service, as our ELB is not configured to route traffic to its availability zone, us-west-2b.
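If you hit this, the fix doesn’t require recreating the load balancer; the missing subnet can be attached afterwards (the subnet ID is a placeholder):

```shell
# Attach the subnet in us-west-2b so Instance 2 becomes reachable
aws elb attach-load-balancer-to-subnets \
  --load-balancer-name my-elb \
  --subnets subnet-cccc3333
```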

2) The load balancer is not accessible on port 80 (for the browser check example)

Earlier, if nothing happened when you tried to access the DNS name of the load balancer you created, it is possible that the load balancer’s security group does not allow traffic on port 80. If it does, make sure your instances are running and are also open on port 80.

By default, the security group that is assigned to newly created ELBs (if not changed) is the default VPC security group.

3) The instances included in the load balancer may not be accessible to the load balancer

If your instances are up and running but the ELB still finds them out of service, first check that the instances are no longer in the registration phase. You can do so by hovering over the “Why?” question mark icon beside the instance state. If registration is done and the error message says that your instances failed the health check, check that your instances are actually running AND are accessible to the ELB on port 80. 🙂

So there: in this post, we saw what an AWS Elastic Load Balancer is and what it does, and we were also able to build one on our own! How awesome is that! 😉 For distributing traffic, other options include Route 53, HAProxy, and Nginx, either as standalone load balancers or sometimes even combined with AWS ELBs. AWS ELBs remain one of the top choices among AWS users, though, as integration is easy if you have already been using EC2. The default configurations are sufficient for most systems, and it’s elastic, meaning it can scale up for increased availability as well as scale down to reduce cost when the extra capacity is not needed. 🙂

So that’s it for an introductory post to AWS ELBs! Thanks for reading! 🙂

