Learnings on Deploying Multi-Site Systems

Hey reader! First of all, a Merry Christmas to you and your family! 🎄🎉 :’D Thank you so much for visiting this site. Whew, I never thought that this blog would contain this many posts :)) It was initially created to be a repository of stuff I discover and things I want to keep for future reference, but thanks to my CS 255 class, CodeSteak happenings, and my training at work, this blog already contains 28 posts and counting! So super thanks again! 😄 💕

Lately, we had one project that consists of multiple connected websites, which also means multiple EC2 instances with different repositories, running different processes for different purposes – some run Rails, some run Redis and Sidekiq, while some run Mongo. The diversity of this project comes with the need for a unified, consistent way of setting environment variables across all the instances. In addition, there must also be an easy, non-tedious way to start, stop, and check each instance’s processes. Also, for sustainability, a fast and easy way to create all the instances at once (and tear them all down at once too, especially after every QA session) would be ideal.

These diverse and flexible requirements for the instances may seem difficult to meet and maintain at first. But thankfully, AWS offers the CloudFormation feature, where you can build stack templates to specify what type of instances you want to open, what security groups you want assigned, what AMIs you want the instances sourced from, what start-up scripts you want included, and a whole lot more. And from these stack templates, you can build your instances – infrastructure as code indeed!

In this blog post, we’ll just see an overview of AWS CloudFormation, since its awesomeness deserves a whole post (and even more)!

The CloudFormation module can be found in your account under Management Tools:

[Screenshot: the CloudFormation module under Management Tools in the AWS console]

A template for AWS CloudFormation looks like this:

 {
   "Resources" : {
     "SampleCFInstance" : {
       "Type" : "AWS::EC2::Instance",
       "Properties" : {
         "SecurityGroupIds" : [ "sg-c52c20a1" ],
         "KeyName" : "adelen-key",
         "ImageId" : "ami-1e6f737f",
         "InstanceType" : "t2.micro",
         "Tags" : [ { "Key" : "Name",
                      "Value" : "SAMPLE-CLOUDFORMATION-INSTANCE" } ]
       }
     }
   }
 }

What our JSON code above does is tell CloudFormation to create an EC2 instance of type t2.micro from an AMI with image ID ami-1e6f737f and include it in the sg-c52c20a1 security group. Afterwards, tag it with the name “SAMPLE-CLOUDFORMATION-INSTANCE”. Simple, isn’t it? 🙂

Once our JSON code is ready, we can upload it in the CloudFormation wizard (by clicking Create Stack on the CloudFormation main page):

[Screenshot: the Create Stack wizard’s template upload step]


After the upload is finished, you will be asked for the stack name and then redirected to the progress window, where you can watch the progress of your stack creation. From here, you can also edit and delete your stacks.

[Screenshot: the stack creation progress window]

So yay! With CloudFormation, it’s now easy to set up and tear down our project’s environments. We can create the whole stack (and even build multiple stacks of the same application) for every QA session and tear it down right after to save on costs. Also, the use of CloudFormation itself is free! You only pay for the EC2 instances, load balancers, and other resources that you create through it.
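
The same lifecycle can also be driven from the AWS CLI, which is handy for scripting those per-QA-session setups and teardowns. A quick sketch (the stack name and template file name here are just examples):

```sh
# Create a stack from our template (hypothetical stack name and file name)
aws cloudformation create-stack \
  --stack-name qa-session \
  --template-body file://multi-site-stack.json

# ...and after the QA session, tear everything down in one command
aws cloudformation delete-stack --stack-name qa-session
```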

Now on to the challenge of accessibility and control over all our instances. Surely, accessing the instances one by one to start, check, or stop each of their respective processes is possible, but of course time-consuming. That time can instead be allocated to more productive tasks, such as building features.

So to save time, we used a Remote Control instance that serves as a jump-off point to the other servers. Using only this Remote Control instance, we can access the other instances, start and stop their processes, and check on them.

[Diagram: the Remote Control instance as a jump-off point to the other instances]

This Remote Control instance is equipped with scripts to pull from the repositories of each of the instances, prepare them (bundle, precompile, etc.), start them, and stop them. Imagine the convenience of being able to access all five other instances from just one instance.
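
As a sketch of how such a script can look (the host names here are hypothetical, and the RUN variable defaults to a dry-run echo so you can preview what would be executed):

```sh
# Run the same command on every app instance from the Remote Control instance.
# On the actual instance, use: RUN="ssh -o StrictHostKeyChecking=no"
RUN="${RUN:-echo ssh}"

run_on_all() {
  for host in rails-a rails-b redis-1 sidekiq-1 mongo-1; do
    echo "== $host =="
    # Each function argument is forwarded as the remote command
    $RUN ubuntu@"$host" "$@"
  done
}

run_on_all uptime
```

On the real instance you would export RUN once, then run things like `run_on_all 'sudo /etc/init.d/nginx status'`.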

Now, we mentioned before that we can specify in our CloudFormation template a specific AMI on which our instances will be based. As I was building the AMI for the web instances, I encountered various scenarios, and for future reference, as well as to help other developers who might encounter the same problems, here are some of the steps I took (and some of the learnings I had):

On setting up the WEB Instance (from where we’ll capture our AMI):

  1. On a freshly opened instance, one of the first steps is to update the apt-get package lists and install git (since we will be pulling our repositories via git)
    sudo apt-get update
    sudo apt-get install git
  2. Make sure to add your public and private keys to your ~/.ssh directory. This will be needed when you clone your repositories or when you need to communicate with other instances without the .pem key file.
  3. When your keys (i.e. id_rsa) are too open, they will just be ignored. So make sure to give them the correct permissions:
    sudo chmod 600 ~/.ssh/id_rsa
    sudo chmod 644 ~/.ssh/id_rsa.pub

    Else you may encounter the following errors:

    Permissions for ‘id_rsa’ are too open. It is required that your private key files are NOT accessible by others. This private key will be ignored.

  4. When you have set your keys in place, start cloning the repos and check out the branch that you will be running your applications from. After your repositories are in place, continue downloading other dependencies (i.e. Ruby, Rails, NodeJS, Nginx, etc).
  5. Make sure that the environment variables needed by your application are set. (i.e. RAILS_ENV=staging)
  6. Setup other configuration files as needed (i.e. mongoid.yml)
  7. And before you capture the AMI for this web instance, make sure that each of your sites is working, since this will be the base AMI from which all of your other instances will be born.
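
For step 5, one way to make the environment variables stick across reboots (an assumption on my part; your app server’s own config is another option) is to append them to /etc/environment, which is read at login:

```sh
echo 'RAILS_ENV=staging' | sudo tee -a /etc/environment
```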

On setting up Nginx (in Staging and Prod environments):

The first time I tried building this system, two months ago, I did everything manually: no iptables, no Nginx, etc. Now I’ve learned that setting up the system is sooooo much easier with these on hand.

While setting this up and figuring out how things work, I was able to access the sites Rails A and Rails B at <IP>:8080, so I thought the setup might be working. But looking further into our configuration, something didn’t seem right, since:

  • Nginx listens at port 1500
  • Nginx’s upstream is set at localhost:8080
  • Rails is running at port 8080

So if Nginx is running (and working properly), we should be able to access it at 1500. And there seems to be a missing link: who/what forwards our requests from port 80 to our Nginx at port 1500? :O

Tinkering with ports and Nginx, I realized that what I was missing was an IP table configuration that would redirect all traffic from port 80 to Nginx at port 1500.

sudo iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-ports 1500

I tried executing this command, and voila: Rails A and B are now accessible at port 80 (no need to specify the ports then!).
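
One caveat worth noting (an addition, not from our original setup notes): iptables rules set this way are lost on reboot. A common way to persist them is to save the rules to a file and restore them at boot-up, e.g. from /etc/rc.local:

```sh
# Save the current rules once (after adding the PREROUTING rule above)
sudo sh -c 'iptables-save > /etc/iptables.rules'

# Then add this line to /etc/rc.local so the rules are restored at boot-up
iptables-restore < /etc/iptables.rules
```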

Also, Nginx does not persist through instance stops and starts, so to make sure that it automatically starts every time you turn on the instance, you may add the following line to your /etc/rc.local, which is always run at system boot-up:

sudo -u nginx /etc/init.d/nginx start

Also, by convention, we create a user nginx that owns our Nginx processes:

sudo adduser --system --no-create-home --disabled-login --disabled-password --group nginx

Then edit nginx.conf, set the user to nginx, and restart Nginx for the changes to take effect.
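
Putting the port numbers from above together, the relevant pieces of nginx.conf look roughly like this (a sketch; the upstream name is a placeholder and the rest of the file is simplified):

```nginx
user nginx;                      # the user we created above

events {
  worker_connections 1024;
}

http {
  upstream rails_a {
    server localhost:8080;       # where the Rails app actually listens
  }

  server {
    listen 1500;                 # iptables redirects port 80 traffic here
    location / {
      proxy_pass http://rails_a;
    }
  }
}
```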

Configuring the Remote Control Instance:

  1. Populate /etc/hosts with the IP addresses of the other instances, so we can easily issue commands such as
    ssh ubuntu@rails-a
  2. Set up the scripts that allow you to connect to the other instances, like the following:
    ./access.sh rails-a

    With access.sh as:

    ssh -o "StrictHostKeyChecking no" -t ubuntu@$1
  3. When accessing other instances, you might notice that the “Welcome to Ubuntu” message always appears and is included in your output. To disable this, just create a .hushlogin file in the home directory on your instances (you may also include this in your WEB AMI :D).
    touch ~/.hushlogin
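
For reference, the /etc/hosts entries from step 1 might look like this (the host names and private IP addresses here are made up):

```
10.0.1.11  rails-a
10.0.1.12  rails-b
10.0.1.13  redis-1
10.0.1.14  sidekiq-1
10.0.1.15  mongo-1
```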

[OPTIONAL]  CloudFormation CFN tools:

In our CloudFormation scripts, to ensure that an instance has been successfully created, after all the start-up scripts have run, we instruct the instance to send a signal to CloudFormation to say that instance creation is done (i.e. via a CreationPolicy). To enable this communication between your instance and CloudFormation, install the CFN tools via the following commands:

sudo apt-get -y install python-setuptools
mkdir aws-cfn-bootstrap-latest
curl https://s3.amazonaws.com/cloudformation-examples/aws-cfn-bootstrap-latest.tar.gz | tar xz -C aws-cfn-bootstrap-latest --strip-components 1
sudo easy_install aws-cfn-bootstrap-latest
which cfn-signal   # verify that the tools are now installed

You may read further on the CreationPolicy attribute in the AWS CloudFormation documentation.
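
For completeness, here is roughly how the CreationPolicy and the cfn-signal call fit into the template above (a sketch; the timeout value, the cfn-signal path, and the start-up script contents are placeholders):

```json
"SampleCFInstance" : {
  "Type" : "AWS::EC2::Instance",
  "CreationPolicy" : {
    "ResourceSignal" : { "Timeout" : "PT15M" }
  },
  "Properties" : {
    "UserData" : { "Fn::Base64" : { "Fn::Join" : [ "", [
      "#!/bin/bash\n",
      "# ... start-up scripts here ...\n",
      "/usr/local/bin/cfn-signal -e $? ",
      "--stack ", { "Ref" : "AWS::StackName" },
      " --resource SampleCFInstance ",
      "--region ", { "Ref" : "AWS::Region" }, "\n"
    ] ] } }
  }
}
```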

So that’s it! The past three days before Christmas have been intense. :)) Filled with so many discoveries and new learnings. 🙂

Again, Merry Christmas and a Happy New Year! 💕

