[Quick] Handy AWS S3 Bucket Policies

Granting of Bucket Access to Another AWS Account

A minimal policy attached to the bucket looks like the following (the Action and Resource values here are illustrative; scope them to what the other account actually needs):

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DelegateS3Access",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::<account-id>:root"
            },
            "Action": [
                "s3:ListBucket",
                "s3:GetObject"
            ],
            "Resource": [
                "arn:aws:s3:::<bucket-name>",
                "arn:aws:s3:::<bucket-name>/*"
            ]
        }
    ]
}
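For readers who script their AWS setup, the same policy can be generated programmatically before attaching it with put-bucket-policy. This is just a sketch in plain Node.js; the account ID, bucket name, and actions below are placeholders:

```javascript
// Sketch: build the delegation policy as a plain object, then emit
// the JSON you would pass to `aws s3api put-bucket-policy`.
// `accountId` and `bucket` are placeholders -- substitute your own.
const accountId = '111122223333';
const bucket = 'my-example-bucket';

const policy = {
  Version: '2012-10-17',
  Statement: [{
    Sid: 'DelegateS3Access',
    Effect: 'Allow',
    Principal: { AWS: `arn:aws:iam::${accountId}:root` },
    // Example actions only -- grant just what the other account needs.
    Action: ['s3:ListBucket', 's3:GetObject'],
    Resource: [`arn:aws:s3:::${bucket}`, `arn:aws:s3:::${bucket}/*`]
  }]
};

console.log(JSON.stringify(policy, null, 2));
```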

CheatSheet: Elixir + Phoenix Installation (OS X)

Hi everyone! 🙂

It’s been a while since I last updated this blog, and a lot of things have surely happened: new technologies, new projects, new hobbies, indeed a lot of new things. One year surely flew fast!

So for this comeback post, I’ll share with you my notes on installing an Elixir/Phoenix setup on Mac OS X. The first time I did this was a year ago and I had to do it again last week. Not much has changed, but I thought of documenting it in case it also helps anyone out there.

So first, what is Elixir? Regular readers of this blog might have noticed that this is the first post to mention it. Elixir is a functional programming language built for distributed and fault-tolerant systems. Elixir compiles to Erlang bytecode, which then runs on BEAM (Erlang’s VM). Before we proceed with Elixir, what then is Erlang? Erlang is a concurrent functional programming language that first appeared in the 1980s and was used mainly back then in telecommunications to support hundreds of thousands of concurrent communications. It was developed at Ericsson by Joe Armstrong and his colleagues.

Elixir leverages this awesomeness of Erlang by running on top of the Erlang VM while providing a syntax and feel reminiscent of more modern programming languages such as Ruby and Python, making it a top choice for writing embedded software as well as web applications.

And in this post, we shall be seeing Phoenix, an MVC web framework running on Elixir. Let’s get started!


To setup a running Phoenix app, we need to set up the prerequisites first before we install Phoenix.

1. Install Elixir via HomeBrew

$> brew install elixir

2. Install the Hex Package Manager

$> mix local.hex

3. Check the Elixir version

$>  elixir -v

* Phoenix requires Elixir 1.4 (or later) and Erlang 18 (or later); if your Elixir version is older (like the sample output below), upgrade it first with brew upgrade elixir. Your output would look something like this:

Erlang/OTP 19 [erts-8.1] [source-77fb4f8] [64-bit] [smp:4:4] [async-threads:10] [kernel-poll:false]

Elixir 1.3.4

4. Install the Mix Phoenix Archive

$> mix archive.install https://github.com/phoenixframework/archives/raw/master/phx_new.ez

* This contains the Phoenix application as well as the compiled BEAM files

5. Install NodeJS (Optional)

By default, Phoenix uses brunch.io to compile your app’s static assets. Brunch.io in turn requires the Node Package Manager (npm) to install its dependencies, and npm requires NodeJS. If you don’t wish to install NodeJS, you may skip this step and head to the Installation Verification section below.

a. Install NVM

To install NodeJS, let’s first install the Node Version Manager (NVM). NVM provides a way to organize and switch between several versions of NodeJS on your system. Developers who have worked on several projects requiring different NodeJS versions appreciate the importance of this. To install Phoenix, we’ll only be using one version, but since we’re already setting things up, let’s do it the right way.

$> curl -o- https://raw.githubusercontent.com/creationix/nvm/v0.33.2/install.sh | bash

After running the above command, the following lines will be added to your ~/.bash_profile, ~/.zshrc, ~/.profile, or ~/.bashrc:

export NVM_DIR="$HOME/.nvm"
 [ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh" # This loads nvm
 [ -s "$NVM_DIR/bash_completion" ] && \. "$NVM_DIR/bash_completion" # This loads nvm bash_completion

It added mine to ~/.bashrc, though nvm still wasn’t picked up by my terminal even after I restarted it, so I added the above lines to my ~/.bash_profile too and it worked!

b. Install NodeJS via NVM

$> nvm install node

Once it’s done, we can verify that NodeJS is indeed installed:

$> node -v


Installation Verification (a.k.a. Creating our first app!)

1. Create a Phoenix app

$> mix phx.new myfirstapp

2. Run the server

$> cd myfirstapp
$> mix phx.server

Note: You may need to run mix ecto.create first in case you encounter the following error:

[error] Postgrex.Protocol (#PID<0.3421.0>) failed to connect: ** (Postgrex.Error) FATAL 3D000 (invalid_catalog_name)

And yay, that’s it! You now have an up-and-running Phoenix installation. 🙂

So that’s it for this post, and I hope to see you again in my future posts! I’ll be sure to write more on Elixir and Phoenix.

See you! ❤


PostgreSQL: Dump Shortcuts

Creating a Database Dump

$> pg_dump -U postgres -d db_name -W --verbose > dump_file_name.sql

For example, if we want to create a database dump for the database app_dev and save it in an app_dev_aug2017.sql file, we can do:

$> pg_dump -U postgres -d app_dev -W --verbose > app_dev_aug2017.sql

Restoring Database Dump (from pg_dump)

$> psql -U postgres db_name < /path/to/database/dump/file.sql

* Feel free to replace postgres with your database username

AWS: CloudFront Static Caching

Hi, dear reader! Happy Independence Day from the Philippines! ❤

In this blog post, we will be looking into AWS CloudFront, AWS’s Content Delivery Network (CDN) service. This post is actually long overdue and has been sitting in my drafts for some time now.

One of the factors that could affect user experience when it comes to websites and applications is loading time. Have you encountered a site that you are very excited about, but unfortunately, its images and content take A LOT of time to load? And for every page, you wait two minutes or more for images to load? It definitely takes away the excitement, doesn’t it?

This is a nightmare for both business and product owners, as it could affect conversion and may increase bounce rates. There are many possible solutions, and in this post, we will see how AWS CloudFront can be used as a caching mechanism for our static files.

Almost every website has static files, be it images, CSS files, JavaScript files, or whole static pages. And since these don’t change too frequently, we can cache them so that subsequent requests won’t hit our server anymore (and can even be served faster, as AWS CloudFront determines the edge location nearest to your customer).

AWS CloudFront accelerates content delivery by having many edge locations, and it automatically determines the edge location that can deliver fastest to your customers. As a quick bit of trivia, we actually have one edge location here in Manila. 🙂 CloudFront also has no upfront cost; you only accrue charges every time your content is accessed.

If you’ll be following this tutorial and creating your bucket, I suggest placing it in the US Standard region, the default endpoint for AWS S3. Based on my experience, having your new bucket in a different region may cause faulty redirects (i.e. requests temporarily routed to the wrong facility) in the beginning. And since we will be immediately experimenting with AWS CloudFront, these faulty redirects may be cached.

I. Creating S3 Bucket

AWS CloudFront works seamlessly with AWS services like EC2 and S3, but also with servers outside of AWS. For this quick example, we will be working with AWS Simple Storage Service (S3).



II. Uploading Your File


Also make sure that the file is viewable to everyone before you access it via CloudFront. Otherwise, the permission-denied error message might be what gets cached.


Once you’re done giving permissions, try accessing the image we just uploaded via the link on the upper part of the properties pane.

For our example, we have:

https://s3.amazonaws.com/s3-cloudfront-bucket-01/sample



III. Creating AWS Cloudfront Distribution

We now go to CloudFront from our Services menu.


Then we click the ‘Create Distribution’ button.


For our purposes, we will choose ‘Web’:


And choose the bucket that we just created a while ago as the origin:


We can retain all other defaults for this example. If you wish to explore more on the other options, you may click on the information icon (filled circle with i) for more details on a specific option.

Once done, we just need to wait for our distribution to be deployed.


IV. Accessing Your Static File via AWS Cloudfront

Once your CloudFront distribution’s status is DEPLOYED, you may now access your static file at the domain name specified by CloudFront.

AWS S3 Link:


AWS CloudFront Link:


We just replaced the S3 URL and bucket name with the assigned AWS CloudFront domain name, which is d35qrezvuuaesq.cloudfront.net in our case.

V. Updating Your Static File

A. Deleting / Modifying Your Static File in AWS S3

Say we want to update the static file we have cached with AWS CloudFront. Modifying or deleting the file in AWS S3 won’t change what is served from the CloudFront URL until the cache expires and a user requests the file again.

B. Versioning Your Static File

To update or show another version of your static file to users, AWS recommends distinguishing the different versions by file name instead of reusing the same name.

For example, if we have sample.png, its version 2 can be named sample_2.png. Of course, this approach requires updating every location where the old links were used.
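The renaming scheme is mechanical enough to script. Here is a hypothetical helper (the name versionedName and the underscore convention are just illustrations of the sample.png / sample_2.png idea above):

```javascript
// Hypothetical helper: derive a versioned file name by inserting
// a version suffix before the extension.
function versionedName(filename, version) {
  const dot = filename.lastIndexOf('.');
  return filename.slice(0, dot) + '_' + version + filename.slice(dot);
}

console.log(versionedName('sample.png', 2));      // sample_2.png
console.log(versionedName('styles/site.css', 3)); // styles/site_3.css
```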

C. Invalidating A CloudFront Asset

If it is too tedious to change every occurrence of the old links, another method exists: asset invalidation. AWS CloudFront allows invalidating assets even before the cache expires, forcing edge locations to query your origin for the latest version.

Note though that only a limited number of invalidation paths are free each month; beyond that, each additional path is charged.

To invalidate an asset, we choose the distribution we are interested in from the list of our existing CloudFront distributions:


Once chosen, we then click on the ‘Invalidations’ tab and click ‘Create Invalidation’.


We then put the object path we want invalidated. This field also accepts wildcards, so ‘/images/*’ is also valid. But for our purpose, since we only want sample.png invalidated, we put its own path.

Yay! We just need to wait for our invalidation request to complete (~5 mins), and we can then access the same CloudFront URL to get the latest version of our static file.



So yay, that was a quick overview of AWS CloudFront as a caching mechanism in AWS. 🙂

Thanks for reading! ‘Til the next blog post! ❤


JS Weekly #1: Underscore, Lodash, Lazy, Apriori, and Grunt

Hi dear reader!

Hope you’re having a great June so far! 🙂 Welcome to this week’s dose of weekly JS!

For this week, we have:

  • Underscore
  • Lodash
  • Lazy
  • Apriori
  • Grunt

Day 1. Underscore.js

As a quick warmup for this series of JavaScript adventures, I took on something more familiar for Day 1: Underscore.js, a JS library which we also saw in a previous blog post early this year: Underscore.js: Your Helpful Library.

Underscore.js provides a lot of functional programming helpers. It allows for easy manipulation of collections, arrays, objects, and even functions.


For a quick application of Underscore.JS, we have a simple Text Analyzer that allows word frequency tracking and word highlighting with HTML, CSS, Underscore.js, and jQuery.


For this application, we mostly used uniq, map, and reduce (which is very helpful!!!) functions, as well as Underscore templates.
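To give a flavor of the word-frequency part, here is a minimal sketch in plain JavaScript; the app itself used Underscore's _.reduce, which works the same way as the native reduce below (the function name and sample text are made up):

```javascript
// Count how often each word appears, via a single reduce pass.
function wordFrequencies(text) {
  return text.toLowerCase().split(/\s+/).reduce(function(freq, word) {
    freq[word] = (freq[word] || 0) + 1;
    return freq;
  }, {});
}

console.log(wordFrequencies('the quick brown fox jumps over the lazy dog'));
// { the: 2, quick: 1, brown: 1, fox: 1, jumps: 1, over: 1, lazy: 1, dog: 1 }
```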

Day 2. Lodash

For Day 2, we have Lodash, a JS library that is very similar to Underscore (in fact, Lodash started as a fork of Underscore but was later largely rewritten under the hood).

Lodash presents a whole lot of functional programming helpers as well.


To quickly try out Lodash, we have a very simple application that allows the input of students’ names and then groups them based on the specified input. This app uses HTML, CSS, and jQuery together with Lodash.

To make this application a little different from our Underscore app, this app focused on DOM manipulation (i.e. wrapInTD) in addition to text and data processing.
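The grouping itself boils down to splitting a list into fixed-size chunks, which Lodash provides as _.chunk. A plain-JS sketch of the same idea (the names below are made up):

```javascript
// Split a list into groups of `size` elements each; the last group
// holds whatever remains. Mirrors Lodash's _.chunk(list, size).
function chunk(list, size) {
  const groups = [];
  for (let i = 0; i < list.length; i += size) {
    groups.push(list.slice(i, i + size));
  }
  return groups;
}

console.log(chunk(['Ana', 'Ben', 'Cara', 'Dan', 'Eva'], 2));
// [['Ana', 'Ben'], ['Cara', 'Dan'], ['Eva']]
```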


Day 3. Lazy.js

Woot, 2 days down, we’re on Day 3, the game changer!

Day 3 has become a game changer for this JS series as this is the first time I used Node.js to quickly apply the JS library for the day. Starting out with Node.js, luckily, was not too difficult as npm install commands were already a little bit familiar from projects before.


Lazy.js presents almost the same functionalities as Underscore but as its official site says, it’s lazier. So what does it mean to be lazier?

Recalling from Underscore, if we want to take the first 5 last names that start with ‘Smith’, we do:

var results = _.chain(people)
  .map(function(person) { return person.lastName; })
  .filter(function(name) { return name.startsWith('Smith'); })
  .first(5)
  .value();

But written as procedural code, the following seems lazier … and also faster. Why? Because we already stop once we complete the five-result requirement.

var results = [];
for (var i = 0; i < people.length; ++i) {
  var lastName = people[i].lastName;
  if (lastName.startsWith('Smith')) {
    results.push(lastName);
    if (results.length === 5) {
      break;
    }
  }
}
And the way Lazy.js evaluates the following chain is along the lines of the above procedural code:

var result = Lazy(people)
  .map(function(person) { return person.lastName; })
  .filter(function(name) { return name.startsWith('Smith'); })
  .take(5);
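If you want to see the early-stopping behavior without any library, the same pipeline can be sketched with an ES6 generator: elements flow through one at a time, and iteration stops as soon as five matches are found (the helper names and sample data here are made up):

```javascript
// Yield matching last names one at a time, on demand.
function* lastNamesStartingWith(people, prefix) {
  for (const person of people) {
    if (person.lastName.startsWith(prefix)) yield person.lastName;
  }
}

// Pull at most n elements from an iterable, then stop early.
function take(iterable, n) {
  const out = [];
  for (const item of iterable) {
    out.push(item);
    if (out.length === n) break; // stop early, like Lazy.js
  }
  return out;
}

const people = [
  { lastName: 'Smith' }, { lastName: 'Jones' }, { lastName: 'Smithers' },
  { lastName: 'Smith' }, { lastName: 'Smith' }, { lastName: 'Smith' },
  { lastName: 'Smith' }, { lastName: 'Brown' }
];
console.log(take(lastNamesStartingWith(people, 'Smith'), 5));
// ['Smith', 'Smithers', 'Smith', 'Smith', 'Smith']
```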

Inspired by a blog post I found on the web by Adam N England, I started the quick application with a section on benchmarking. Through it, I also met another npm plugin for the first time: bench, a JS utility that allows side-by-side comparison of functions’ performance.


This application was a great learning experience as it also served as a playground for Node.js (i.e. requiring npm packages and my own files, using exports, etc.).


Moving on from benchmarking, in this app we were also able to harness some of the capabilities of Lazy.js, including indefinite sequence generation and asynchronous iteration.

Indefinite Sequence Generation Sample:

var summation = Lazy.generate(function() {
  var sum = 0, start = 1;

  return function() {
    sum += start;
    start += 1;
    return sum;
  };
}());
// undefined

summation.take(10).toArray();
// [1,3,6,10,15,21,28,36,45,55]



Day 4. Apriori.js

Yay, Day 4! My graduate class for this semester just ended and one of our final topics was on Unsupervised Mining methodologies which included Market Basket Analysis.

For work, we have also been looking into the Apriori algorithm for Rails, as we already have it in R. Wanting to investigate the Apriori algorithm more, I looked for a JS plugin that implements it. And luckily, I found apriori.js!


Documentation was quite limited, so I learned to read the repository’s tests and its main code to get to know the available functions.

For the quick app, we have a Market Basket Analyzer that outputs the associations found along with their respective support and confidence values. The minimum support and minimum confidence inputs are optional.
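To make the support and confidence outputs concrete, here is a tiny hand calculation in plain JavaScript (the transactions are made up; apriori.js computes these same ratios over the frequent itemsets it finds):

```javascript
// Support and confidence for the rule {bread} -> {milk}.
const transactions = [
  ['bread', 'milk'],
  ['bread', 'butter'],
  ['bread', 'milk', 'butter'],
  ['milk']
];

// Number of transactions containing every item of the itemset.
function count(itemset) {
  return transactions.filter(t => itemset.every(i => t.includes(i))).length;
}

// support = freq(X and Y) / total; confidence = freq(X and Y) / freq(X)
const support = count(['bread', 'milk']) / transactions.length;
const confidence = count(['bread', 'milk']) / count(['bread']);
console.log(support, confidence.toFixed(2)); // 0.5 0.67
```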


Day 5. Grunt

Woot! And finally for Day 5, we have Grunt! Being really new to Grunt, I started Day 5 by reading the book Automating with Grunt. Grunt is quite similar to Rake, a Ruby tool that we also use to define and run tasks.


One of the quick applications I used Grunt with is a weather fetcher built on OpenWeatherMap. This is an example of a multitask: a task that can have multiple targets.
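A multitask is declared once and configured with several named targets. Here is a sketch of what such a Gruntfile could look like (the task name, targets, and logging are illustrative; the real app would do the HTTP fetch against OpenWeatherMap):

```javascript
// Gruntfile.js -- sketch of a multitask with two targets.
module.exports = function(grunt) {
  grunt.initConfig({
    weather: {
      manila: { city: 'Manila' },
      tokyo:  { city: 'Tokyo' }
    }
  });

  // A multitask runs once per target: `grunt weather` runs all
  // targets, `grunt weather:manila` runs just one.
  grunt.registerMultiTask('weather', 'Fetch current weather', function() {
    // this.data holds the current target's config, e.g. { city: 'Manila' }.
    grunt.log.writeln('Fetching weather for ' + this.data.city);
  });
};
```

Running `grunt weather` would then log the city line once for each target.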

Running Grunt tasks is easy; for example, to run the weather app we just need to do:

$ grunt weather


In one of the quick apps, I was also able to discover and incorporate a Grunt plugin, grunt-available-tasks, which makes viewing available tasks easier and more colorful (literally!).

So there! Yay, that’s it for Week 1 of this Days of JS project! ❤

Stay tuneeeed for more! 🙂

Thanks, reader, and have a great week ahead!!!

Wings of Maroon and Green

It’s graduation season again in UP Diliman, woo! As the sunflowers start to grow, memories and nostalgia start to creep in, hahaha. It’s been 2 years since college ended, life after college has been pretty awesome, and hopefully I’m still on the right track.



So tonight, after running a 5K at my “acad oval” here in BGC (the track around the Mind Museum), I went through some old files on my computer and stumbled upon this graduation speech from our 2014 commencement exercises.

I am quite amazed by how these words rang true with conviction then, and how they continue to ring true and serve as a daily guide. Truly, this speech wouldn’t have been possible without all the experiences and all the people I met who became part of this journey.

For the sake of nostalgia and memories, let me share it here.

To our distinguished guests of honor, dear administrators, foundations of education, teachers, mentors, staff, alumni and benefactors, dear parents, fellow graduates, ladies and gentlemen, a pleasant evening.

Today, we are gathered in celebration of honor and excellence, the most awaited day of our lives: our GRADUATION. As we come together gracefully dressed in our best attire, we remember and are thankful that we have surpassed all those sleepless nights when we were too busy cramming papers due at midnight or reviewing for exams the next day, when no matter how often we said, “Keep Calm, Wala Pang 11:59,” we still felt that twinge of panic within ourselves. Surely we will also never forget that joyous feeling upon receiving our exam results with flying colors, or perhaps that first academic heartbreak when we learned that our best wasn’t enough, and how we promised, with full conviction, to do better the next time.

For the past years, my fellow graduates, our days were filled not only with books, machine exercises, laboratory sheets, bluebooks, long exams, term papers, homeworks, quizzes, field trips, calculators, asymptotes, sine cosines, algorithms, vectors, gravity, refraction, but were also filled with laughter, joy, sadness, tears, surprises, hopes, and friendship – a unique mix for each of us that contributes to who we are today.

Our journey wasn’t easy. There may even have been days when we doubted if we could make it through. Like butterflies coming out of our cocoons, the process might be long, difficult, time-consuming, and sometimes painful. But my dear friends, we have proven that it’s all about determination, perseverance, and the will to survive. As the anecdote goes, this challenge of coming out of the cocoon is what makes the butterfly strong and ready for the real world, because success really comes with no shortcuts; it is a culmination of hard work, perseverance, sacrifice, and faith.

As we move on to the next chapter of our lives, let us not forget the people who have guarded and cared for our cocoons, they who were with us and in their own ways showed us that they care. On behalf of Class 2014, I would like to express our sincerest gratitude to our teachers and mentors, the lights who have guided our paths in school and have taught us lessons not only for the classroom but for life as well; to our thesis advisers, for always sharing their precious time and never failing to give us golden comments, support, and inspiration; to all the staff, for making our stay in UP comfortable by attending to our various needs in the library, laboratory, canteens, and everywhere else; to our friends, who have become a part of our lives and who easily make a hectic day in school seem light and enjoyable; to our families, parents, relatives, and guardians, our pillars of strength and source of inspiration, thank you very much for the understanding, love, and care, and maraming salamat po (thank you very much) for working so hard to provide us with good education and comfortable living. And lastly, to God Almighty, the source of light and wisdom, for without Him, nothing is possible.

And of course, my fellow graduates, let us not forget to thank and congratulate ourselves for a job well done! Our efforts and perseverance paid off. Here we are today! Remember the cups of coffee we drank to survive all-nighters, the meals we skipped because we were busy doing requirements due in an hour or so, the kilometers we walked and even ran under the heat of the sun or the cold drops of the rain to and from classes? This, my dear friends, we have endured because we had a goal, a goal that materializes today as we receive our diplomas and become graduates of UP, our dear university, which for the past years has nurtured us under her tutelage and care.

Today, being the commencement of our college days, also brings us to a greater challenge: that of making a mark and propagating change in the real world. Like the butterfly that has come out of its cocoon, we are now faced with the bigger challenge of flying with a purpose in the outside world, away from the protection of our cocoons, but equipped and strengthened by the experiences and learning they provided. Life after college may be faced with more uncertainties as we are now on our own, away from the university’s rules and curriculum guides.

If this were enrollment, we would now be left on our own to enlist subjects, to add and change matriculation, with no more advising step; and if we drop, there is no longer an adviser’s consent or guidance, because from now on, everything depends on us.

Decisions will become harder to make, as what is right may not be convenient, as what is right may be tiring, and as what is right might not be easy. There may come a time when we are tempted to shift our goals, values, priorities, and principles because of confusion and uncertainty amid the myriad of choices in the real world. Despite this, fellow graduates, I have faith that the spirit of the Oblation will burn in us and continue to inspire us to offer our lives to others for a good cause. This spirit will also help us flutter our wings of maroon and green in upholding the values that our dear university, UP, has taught us.

Let us also not forget that we are where we are today because of the bayanihan of the Filipino people: millions of Filipinos who contributed from their hard-earned income so that we could study. Our whole being comes from the care and love of our families, relatives, friends, teachers, classmates, and the entire nation. Let us not waste their sacrifice. May we give back what they have sown, in the hope that, in time, we will become the pillars of the nation’s change and progress. May that time come. Let us never forget.

As we go on our separate ways, may we forever be united by one goal and one vision: to work for the good of our countrymen, contributing to the development of the country as engineers of the future, offering our craft for the greater good, upholding honor and excellence, and giving justice to the UP education that we’re privileged to experience. Let us not only be iskolars ng bayan (scholars of the nation), but also iskolars PARA sa bayan (scholars FOR the nation).

Congratulations to all! Maraming Salamat, UP!


Quick Notes: Running http-server with npm

Hi dear reader! 🙂

How’s your May going so far! 🙂 Hope everything is going as wonderful and exciting as you imagined your May to be! ❤

This blog post will be short and sweet, as it is only a mini documentation on running http-server from the Node Package Manager (npm).

http-server is a quick way to run a web server locally to serve your pages. For example, I found this useful months ago when I was playing with AngularJS, where I needed my Angular app to run on a web server for it to work seamlessly.

I. Initializing repository with npm

To be able to install node packages locally, you can issue the following in your project’s root directory:

$ npm init

After doing this, a file named package.json will be generated. It will contain a list of the packages you have installed for your project and their corresponding versions, if applicable.

II. Installing http-server

$ npm install http-server

After installation, you will find a generated folder named node_modules, where npm has installed http-server and where your future packages will also be saved.

III. Running http-server

$ ./node_modules/.bin/http-server

or better yet, you could add this location to your PATH environment variable:

$ export PATH=./node_modules/.bin:$PATH

So you can then just issue the command:

$ http-server

Take note that what we added to our PATH environment variable is a relative path, so it applies to other projects / directories as well.

Doing the export alone puts the node_modules directory in your PATH only temporarily. For this to persist, we can put the same line in our ~/.bashrc instead:

export PATH=./node_modules/.bin:$PATH


So yay, there! 🙂 Thank you, reader! Wishing you a great week ahead! 🙂

Restoring MongoDB from a Dump File in AWS S3

Hi everyone!

It’s been a long time already since my last blog post! *cue music: “It’s been a long time without you, my friend!“* Haha. :))

Life has been pretty fast and busy lately, wooo, but fun nonetheless! I actually just came from a family vacation in Palawan and it was super nice! Clear waters, sunny skies, fresh air, yummy seafood, and crisp waves humming in one’s ear. All my favorite elements combined!

Woooo, so since today is #backtowork day, I started it with preparing a golden image for our QA database.

Backing up one of our databases wasn’t as tedious before (it completed after an hour or so). But due to some major changes in data collection and recording, one of our databases became huge, which also made restoring take a while.

Due to this, preparing the testing database became one of the challenges during our last QA testing session. I started restoring the database at 6 pm and it was still creating indices at 3 am. Because of this, I plan to create a golden database image regularly (maybe twice a month) and use it for QA testing sessions.

So there, sorry for the long introduction to this post! In this blog post, we’ll walk through the steps of creating a golden image for your MongoDB database: pulling your dump from AWS S3 and setting it up on an AWS EC2 instance. 🙂

My setup includes:

  • Mongo Database
  • Database Dump in S3
  • AWS EC2 Instances.

We can divide the whole process into 5 parts:

  1. Preparing the AWS EC2 Instance
  2. Copying the Dump from S3
  3. Mounting AWS EBS storage
  4. Preparing the Copied MongoDB Dump
  5. Restoring the Copied MongoDB Dump

Before anything else, let us begin with the following quote:

TMUX is always a great idea!

Oftentimes, we get disconnected from our SSH sessions, sometimes unfortunately with a process still running. Oftentimes too, we want to get back to whatever our workspace was. For this purpose, we can use tools like tmux or GNU screen that provide session management (along with other awesome features like screen multiplexing, etc.).

I. Preparing the AWS EC2 Instance

For the first part, we will be preparing the AWS EC2 instance where we will run MongoDB and restore our database.

A. Provisioning the AWS EC2 Instance

For this, I used an Ubuntu 14.04 server,


and provisioned it with 72 GB for the main memory and an additional 100 GB EBS volume. These sizes may be too big or too small for your setup; feel free to change them to numbers that suit you best.


B. Installing MongoDB

i. Import MongoDB public key
$ sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 7F0CEB10
ii. Generate a file with the MongoDB repository URL
$ echo 'deb http://downloads-distro.mongodb.org/repo/ubuntu-upstart dist 10gen' | sudo tee /etc/apt/sources.list.d/mongodb.list
iii. Refresh and update packages
$ sudo apt-get update
iv. Install MongoDB
$ sudo apt-get install -y mongodb-org

C. Operating MongoDB

Here are some useful commands on operating MongoDB.

i. Starting Mongo:
$ sudo service mongod start
ii. Checking If It is Running:
$ tail -n 500 /var/log/mongodb/mongod.log

You should see something like:

[initandlisten] waiting for connections on port 27017
iii. Stopping Mongo
$ sudo service mongod stop
iv. Restarting Mongo
$ sudo service mongod restart

II. Copying the Dump from AWS S3

If your dump in S3 is publicly available, go ahead and use wget with the URL that S3 provides for your file. But in case its security settings allow it to be viewable only from certain accounts, you can use the AWS CLI to copy from S3:

i. Install AWS CLI
$ sudo apt-get install awscli
ii. Configure your Credentials
$ aws configure
iii. Execute the Copy Command

* Feel free to change the region to the region where your bucket is

$ aws s3 cp s3://bucket-name/path/to/file/filename /desired/destination/path --region us-west-2


III. Mounting AWS EBS Storage

In part I, we provisioned our EC2 instance with 100 GB of EBS storage; now it’s time to mount it in our EC2 instance to make it usable.

We first want to see a summary of available and used disk space in our file system:

$ df -h

We can see that our 100 GB is still not part of this summary. Listing all block devices with:

$ lsblk

We get a listing that shows the new 100 GB volume (xvdb in our case) with no mount point yet.

Since this is a new EBS volume, it has no file system yet, so we proceed to create one and mount the volume:

i. Check and Create File System
$ sudo file -s /dev/xvdb
$ sudo mkfs -t ext4 /dev/xvdb
ii. Create, Mount, Prepare Directory
$ sudo mkdir /data
$ sudo mount /dev/xvdb /data
$ cd /data
$ sudo chmod 777 .
$ sudo chown ubuntu:ubuntu -R .

For an in-depth tutorial on attaching EBS volumes, you may check another blog post of mine: Amazon EBS: Detachable Persistent Data Storage.

IV. Preparing the Copied MongoDB Dump

Once you have downloaded your dump from S3, most likely it is compressed and zipped to save space. In that case, you need to uncompress it.

If your dump file has a .tar extension, you can untar it by:

$ tar -xvf /path/to/dump/dump-filename.tar

On the other hand, if your dump file has a .tar.gz extension, you can untar-gz it by:

$ tar xvzf /path/to/dump/dump-filename.tar.gz -C desired/destination/path/name

Continue untarring and unzipping your files if the main dump file contains nested compressed resources.

V. Restoring the Copied MongoDB Dump

$ export LC_ALL="en_US.UTF-8"
$ mongorestore --drop --host localhost --db db_name_here path/to/the/copied/dump/filename

If you are in tmux, in case you get disconnected, you can get back to your previous workspace by:

$ tmux attach


So there, a really quick and short tutorial on how to get our Mongo dumps and databases up and running. 🙂

Getting Started with Ohm: A Cheat Sheet Guide

Ohm Cheatsheet

Ohm: a library for storing data in Redis, a key-value store

Ohm Version: 2.3.0

I. Prerequisites

  • Redis
  • Ruby

II. Installation

Just install the ohm gem, then fire up an irb session or write a Ruby script and require it.

gem install ohm

III. Connecting to Redis

A. Single Connection

By default, Ohm connects to Redis at localhost, port 6379. If you wish to override this, you may set a different Redis URL with Redic, a lightweight Redis client.

require "ohm"
Ohm.redis = Redic.new("redis://<IP>:<PORT>")

# Sample: Ohm.redis = Redic.new("redis://")

B. Multiple Connections

Individual models can connect to different Redis servers.

Ohm.redis = Redic.new(REDIS_URL1)

class Student < Ohm::Model
end

Student.redis = Redic.new(REDIS_URL2)

IV. Simple Key-Value Fetch and Get

require "ohm"

Ohm.redis.call "SET", "Key", "Value"

Ohm.redis.call "GET", "Key"
# => "Value"

V. Mapping Objects to Key Value Store

A. Class Declaration

class Student < Ohm::Model
end

B. Classes Attributes, References, and More

class Student < Ohm::Model
  attribute :student_number
  attribute :first_name
  attribute :last_name
  attribute :birthdate
  index :student_number
end

class Course < Ohm::Model
  attribute :name
  attribute :domain
  reference :classroom, :Classroom
  set :students, :Student
  counter :waitlist
  counter :drops
  index :name
end

class Classroom < Ohm::Model
  attribute :name
  collection :courses, :Course
end

i. attribute

Any value that can be stored in a string. Numbers stored are returned as strings.

ii. set

Similar to an unordered list.

a. Adding To a Set

course = Course.create(name: 'English 1')

student1 = Student.create(first_name: 'Adelen', last_name: 'Festin')
course.students.add(student1)

course.students.add(Student.create(first_name: 'Victoria', last_name: 'Po'))

course.students.size
# => 2
b. Iterating From a Set

Elements of a set are returned as enumerables, to which you can apply iterative methods.

course.students.each do |student|
  puts student.last_name
end
# => Festin
#    Po

course.students.map(&:first_name)
# => ["Adelen", "Victoria"]
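Because Ohm sets behave like Ruby enumerables, the usual iteration patterns carry over directly. As a plain-Ruby illustration that needs no Redis, the hypothetical StudentRow struct below stands in for the Ohm model:

```ruby
# StudentRow is a plain-Ruby stand-in for the Ohm model (no Redis needed).
StudentRow = Struct.new(:first_name, :last_name)

students = [
  StudentRow.new('Adelen', 'Festin'),
  StudentRow.new('Victoria', 'Po')
]

# each: visit every element in turn
students.each { |s| puts s.last_name }
# Festin
# Po

# map: collect one attribute from every element
first_names = students.map(&:first_name)
# => ["Adelen", "Victoria"]
```

Any other Enumerable method (select, sort_by, count, and so on) works the same way on the set's elements.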
c. Removing From a Set

  • IDs start at 1

Proper way:

Use the delete method on the set to actually remove the element from the set.

student = course.students[1]
course.students.delete(student)

Improper way:

Deleting the object itself.

student = course.students[1]
student.delete

Doing so would leave the deleted element still part of the set:

course.students.size
# count would still include deleted element

But accessing the set element would return nil:

course.students.to_a
# [.., nil, ..]
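The pitfall can be sketched without Redis at all: model the store as a Hash of id to object and the set as a list of ids (hypothetical names, for illustration only). Deleting only the object leaves its id behind in the set:

```ruby
# Hypothetical in-memory model of the storage layout:
# objects live in an id => object store; sets hold only ids.
store = {
  1 => { first_name: 'Adelen' },
  2 => { first_name: 'Victoria' }
}
student_ids = [1, 2] # the "set" of the course

# Improper: delete the object but not its set membership
store.delete(2)

student_ids.size
# => 2   count still includes the deleted element

student_ids.map { |id| store[id] }
# => [{:first_name=>"Adelen"}, nil]   the stale id resolves to nil

# Proper: remove the id from the set as well
student_ids.delete(2)
student_ids.size
# => 1
```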

iii. list

class PreenlistmentList < Ohm::Model
  reference :course, :Course
  list :students, :Student
end

taekwondo = Course.create(name: 'PE 2 TKD')

student1 = Student.create(student_number: '2010-00033')
student2 = Student.create(student_number: '2010-18415')
student3 = Student.create(student_number: '2010-30011')

pl = PreenlistmentList.create(course: taekwondo)

# Pushes element to end of list
pl.students.push(student1)
pl.save # necessary for list to persist

# Places element to front of list
pl.students.unshift(student2)

# Push student3 twice
pl.students.push(student3)
pl.students.size
# => 3

pl.students.push(student3)
pl.students.size
# => 4

# Deletes all occurrences of student3
pl.students.delete(student3)
pl.students.size
# => 2

iv. counter

Just like a regular attribute, except that direct manipulation and assignment are not allowed; it can only be incremented or decremented.

Course[1].incr(:waitlist) # 0 + 1
# => 1

course = Course[1]
course.incr(:waitlist) # 1 + 1
# => 2

course.decr(:waitlist) # 2 - 1
# => 1

For multiple attributes, you may increase and/or decrease them in one line by separating the attributes with a comma.

course.decr(:waitlist, :drops)
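The counter semantics (increment and decrement only, no direct assignment) can be mimicked in plain Ruby. The Counter class below is an illustrative sketch, not Ohm's implementation, which keeps the value in Redis:

```ruby
# Minimal counter with Ohm-like incr/decr semantics (illustrative only).
class Counter
  def initialize
    @value = 0
  end

  # Readable, but no writer is defined: direct assignment is not allowed.
  attr_reader :value

  def incr(by = 1)
    @value += by
  end

  def decr(by = 1)
    @value -= by
  end
end

waitlist = Counter.new
waitlist.incr # 0 + 1 => 1
waitlist.incr # 1 + 1 => 2
waitlist.decr # 2 - 1 => 1
waitlist.value
# => 1
```

Attempting `waitlist.value = 5` raises NoMethodError, which mirrors the "no direct assignment" rule for Ohm counters.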

v. reference

Reference to another model; similar to a foreign key.

ph108 = Classroom.create(name: 'Palma Hall, Room 108')

course = Course.create(name: 'Geog 1', classroom: ph108)

course.classroom
# => #<Classroom:0x007f8dc9f2bb70 @attributes= ...

course.classroom.name
# => "Palma Hall, Room 108"

vi. collection

A shortcut accessor to search for all models that reference the current model. Returns an enumerable.

ph108.courses.map(&:name)
# => ["Geog 1"]
VI. CRUD Operations

A. Creating Records

i. Immediate Create

course = Course.create(name: 'Math 17')

course.id
# => "1"

course.name
# => "Math 17"

ii. Initialize and Save
another_course = Course.new name: 'CS 11'

# => "CS 11"


B. Reading / Looking up a record

i. By Index

course = Course[1]
course.name
# => "Math 17"

ii. By Query (id)

course = Course.find(id: 1).first
course.name
# => "Math 17"

iii. By Query (attributes)

course = Course.find(name: 'Math 17').first
course.name
# => "Math 17"

C. Updating Records

course = Course.find(name: 'Math 17').first
course.id
# => "1"

course.update(name: "Math 53")
course.name
# => "Math 53"

course.id
# => "1"

D. Deleting Records

i. Direct Access

Course[1].delete

ii. Prior Assignment

course = Course[1]
course.delete

VII. Filtering

A. Single Attribute

Course.find(domain: 'Math')

B. Multiple Attribute

Course.find(domain: 'English', waitlist: 0)

C. Multiple Values for an Attribute

# Find all courses with waitlist count = 0 and
# has domains of either Math or English

Course.find(waitlist: 0).combine(domain: ["Math", "English"])

D. With Exceptions

# Find all courses under the Math domain except
# for courses with Math 2 as the name

Course.find(domain: "Math").except(name: "Math 2")
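Conceptually, find narrows the set on indexed attributes, combine intersects with a union of values, and except subtracts matches. A plain-Ruby analogue over an array of hashes (illustrative data, no Redis) produces the same result shapes:

```ruby
# Illustrative data standing in for indexed Course records.
courses = [
  { name: 'Math 17',   domain: 'Math',    waitlist: 0 },
  { name: 'Math 2',    domain: 'Math',    waitlist: 1 },
  { name: 'English 1', domain: 'English', waitlist: 0 }
]

# find(domain: 'Math')
math = courses.select { |c| c[:domain] == 'Math' }

# find(waitlist: 0).combine(domain: ["Math", "English"])
combined = courses.select do |c|
  c[:waitlist] == 0 && ['Math', 'English'].include?(c[:domain])
end

# find(domain: "Math").except(name: "Math 2")
filtered = math.reject { |c| c[:name] == 'Math 2' }
filtered.map { |c| c[:name] }
# => ["Math 17"]
```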

VIII. Indices

Adding indices to models would allow you to execute find operations on the indexed attributes.

class People < Ohm::Model
  attribute :name
  attribute :gender

  index :name
end
Valid find lookups:

People.find(name: 'John Doe')

Invalid find lookups:

People.find(gender: 'Female')
# => Ohm::IndexNotFound: Ohm::IndexNotFound

# To fix: add gender to the index list
# index :name, :gender

IX. Sorting

All sets can be sorted with sort, which sorts by ID by default; this can be overridden by passing the by parameter.

On Ohm version 2.3.0, you can only sort by numeric fields; otherwise, a runtime error or an unsorted output may result.

A. By

Indicate the attribute on which sorting will be based.

For accuracy, use sort_by:

courses = Course.all.sort_by(:waitlist)
courses.map(&:waitlist)
# => [0, 0, 1]
B. Order
Course.all.sort_by(:waitlist, order: 'ASC').map(&:waitlist)
# => [0, 0, 1]

Course.all.sort_by(:waitlist, order: 'DESC').map(&:waitlist)
# => [1, 0, 0]
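The ASC/DESC behaviour maps cleanly onto plain-Ruby sorting. Here the hypothetical CourseRow struct stands in for the Ohm model, so no Redis is needed:

```ruby
# CourseRow is a plain-Ruby stand-in for the Ohm model (no Redis needed).
CourseRow = Struct.new(:name, :waitlist)

courses = [
  CourseRow.new('Math 17', 1),
  CourseRow.new('CS 11', 0),
  CourseRow.new('Geog 1', 0)
]

# ASC: ascending by the numeric field
asc = courses.sort_by(&:waitlist).map(&:waitlist)
# => [0, 0, 1]

# DESC: same sort, reversed
desc = courses.sort_by(&:waitlist).reverse.map(&:waitlist)
# => [1, 0, 0]
```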

C. Limit

Course.all.sort(limit: [1, 2])
# Gets 2 entries starting from offset 1

Course.all.sort(limit: [0, 1])
# Gets the first entry
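The `limit: [offset, count]` pair follows the same start-plus-length convention as Ruby's Array#[], which makes the two examples above easy to sanity-check:

```ruby
ids = ["1", "2", "3", "4"]

ids[1, 2] # 2 entries starting from offset 1
# => ["2", "3"]

ids[0, 1] # the first entry
# => ["1"]
```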

X. Uniqueness

class Room < Ohm::Model
  attribute :name
  attribute :building
  unique :name
end
Room.create(name: 'Rm 180', building: 'Palma Hall')
# success

Room.create(name: 'Rm 180', building: 'DCS')
# Ohm::UniqueIndexViolation: UniqueIndexViolation: name
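The unique index can be sketched in plain Ruby with a Set guarding creation. This is an illustrative stand-in only (Ohm raises Ohm::UniqueIndexViolation; the hypothetical class below raises a plain RuntimeError with a similar message):

```ruby
require 'set'

# Illustrative guard: reject records whose :name was already taken.
class UniqueIndex
  def initialize
    @seen = Set.new
  end

  # Set#add? returns nil when the element is already present,
  # which is how duplicates are detected here.
  def register!(name)
    raise "UniqueIndexViolation: name" unless @seen.add?(name)
    name
  end
end

index = UniqueIndex.new
index.register!('Rm 180') # success

message = begin
  index.register!('Rm 180') # duplicate name; a different building changes nothing
rescue RuntimeError => e
  e.message
end
# message == "UniqueIndexViolation: name"
```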

PostgreSQL 101: Getting Started! (Part 1)


An object-relational database system

I. Installation

A. Mac OSX:

brew install postgresql

B. Ubuntu

sudo apt-get update
sudo apt-get install postgresql postgresql-contrib

II. Console Commands

A. Connecting to PostgreSQL Server

To connect to the PostgreSQL server as the user postgres:

psql -U postgres

By default, psql connects to a PostgreSQL server running on localhost at port 5432. To connect to a different port and/or host, add the -p and -h flags:

psql -U postgres -p 12345 -h <host>

Once in, you may navigate via the following commands:

  • \l – list databases
  • \c – change databases
  • \d – list tables
  • \df – list functions
  • \df+ – list functions with definitions
  • \q – quit

III. Database Creation

CREATE DATABASE <database name>;

# Creates database with name: test_db
CREATE DATABASE test_db;

IV. Database Drop

DROP DATABASE <database name>;

# Drops database with name: test_db
DROP DATABASE test_db;

V. Table Creation

CREATE TABLE programs(
  programid SERIAL PRIMARY KEY,
  degree VARCHAR(10),
  program VARCHAR(100)
);

CREATE TABLE students(
  studentid SERIAL PRIMARY KEY,
  student_number VARCHAR(15),
  first_name VARCHAR(50),
  last_name VARCHAR(50),
  programid INTEGER REFERENCES programs
);

A. Column Data Types

Commonly used types include:

  • INTEGER – whole numbers
  • SERIAL – auto-incrementing integer, handy for primary keys
  • VARCHAR(n) – variable-length string of up to n characters
  • TEXT – variable-length string with no set limit
  • BOOLEAN – true/false values
  • DATE / TIMESTAMP – calendar date / date with time
  • NUMERIC – exact-precision numbers

B. Common Added Options

  • PRIMARY KEY – unique, non-null row identifier
  • NOT NULL – disallow empty values
  • UNIQUE – disallow duplicate values
  • DEFAULT – value used when none is supplied
  • REFERENCES – foreign key constraint to another table
VI. CRUD Operations

A. Insertion of Rows


INSERT INTO table_name(column1, column2, column3...)
VALUES(value1, value2, value3...);


INSERT INTO programs(degree, program)
VALUES('BS', 'Computer Science');

INSERT INTO programs(degree, program)
VALUES('BS', 'Business Administration and Accountancy');

INSERT INTO students(student_number, first_name, last_name, programid)
VALUES('2010-00031', 'Juan', 'Cruz', 1);

INSERT INTO students(student_number, first_name, last_name, programid)
VALUES('2010-00032', 'Pedro', 'Santos', 2);

B. Read/Lookup of Row

i. Get All Rows

SELECT * FROM students;

ii. Get Rows Satisfying Certain Conditions

# Gets row/s with studentid = 1

SELECT * FROM students where studentid = 1;

# Gets row/s where the last_name starts with 'cru' (case-insensitive)

SELECT * FROM students where last_name ilike 'cru%';

# Gets row/s where the student_number column is either '2010-00033', '2010-30011', or '2010-18415'

SELECT * FROM students where student_number in ('2010-00033', '2010-30011', '2010-18415');

iii. Get Specific Columns from Resulting Rows

# Selects the last_name and first_name from the students table

SELECT last_name, first_name from students;

# Selects the program column from rows of the programs table satisfying the condition and then prepending the given string

SELECT 'BUSINESS PROGRAM: ' || program from programs where program ilike '%business%';

C. Update of Row

i. Update all Rows

UPDATE students SET last_name = 'Cruz';

ii. Update Rows Satisfying Conditions

UPDATE students SET last_name = 'Santos' where studentid = 1;

UPDATE programs SET degree = 'BA' where programid NOT IN (2);

D. Deletion of Row

i. Delete all Rows

 DELETE FROM students;

ii. Delete Rows Satisfying Conditions

DELETE FROM students WHERE studentid NOT IN (1,2);

VII. Queries

A. Joins

i. Inner Join

SELECT * FROM table_1 JOIN table_2 using (common_column_name);
SELECT student_number, program FROM students JOIN programs using (programid);

ii. Left Join

SELECT * FROM table_1 LEFT JOIN table_2 on table_1.column_name = table_2.column_name;

We insert a student row without a program

INSERT INTO students(student_number, first_name, last_name)
VALUES('2010-35007', 'Juana', 'Change');

Doing a left join would still return the recently inserted row but with empty Programs-related fields.

SELECT * FROM students LEFT join programs on students.programid = programs.programid;

iii. Right Join

SELECT * FROM table_1 RIGHT JOIN table_2 on table_1.column_name = table_2.column_name;

We insert a program row without any students attached

INSERT INTO programs(degree, program)
VALUES('BS', 'Information Technology');

Doing a right join would still return the recently inserted row but with empty Students-related fields.

SELECT * FROM students RIGHT join programs on students.programid = programs.programid;
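The difference between the join types can also be seen outside SQL. In plain Ruby over arrays of hashes (illustrative data only), an inner join keeps just the matching pairs, while a left join keeps every left-hand row and fills in nil for unmatched fields:

```ruby
students = [
  { student_number: '2010-00031', programid: 1 },
  { student_number: '2010-35007', programid: nil } # student without a program
]
programs = [
  { programid: 1, program: 'Computer Science' }
]

# Inner join: keep only students with a matching program row
inner = students.map { |s|
  p = programs.find { |pr| pr[:programid] == s[:programid] }
  s.merge(p) if p
}.compact
# => one row (2010-00031)

# Left join: keep every student; unmatched rows get a nil program field
left = students.map { |s|
  p = programs.find { |pr| pr[:programid] == s[:programid] }
  s.merge(p || { program: nil })
}
# => two rows; 2010-35007 has program: nil
```

A right join is the mirror image: every program row is kept, with nil student fields when no student references it.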


B. Where

Specify conditions by which rows from the query will be filtered.

SELECT * from students where programid IS NOT NULL;

C. Group By

Allows the use of aggregate functions, with the attributes provided to the GROUP BY clause as the basis for aggregation.

SELECT program, COUNT(*) FROM students
JOIN programs USING (programid) GROUP BY program;

Above example counts students per program.

D. Having

Similar to WHERE but applies the condition to the groups produced with GROUP BY.

SELECT program, COUNT(*) FROM students
JOIN programs USING (programid) GROUP BY program HAVING COUNT(*) > 1;
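The same aggregation can be expressed in plain Ruby with group_by, which makes it easy to see that HAVING filters the groups rather than the individual rows (illustrative data, with rows already joined to their program names):

```ruby
# Each row stands for a student already joined to its program name.
rows = [
  { student: 'Juan',  program: 'Computer Science' },
  { student: 'Pedro', program: 'Business Administration and Accountancy' },
  { student: 'Juana', program: 'Computer Science' }
]

# GROUP BY program with COUNT(*): one count per group
counts = rows.group_by { |r| r[:program] }
             .transform_values(&:count)
# => {"Computer Science"=>2, "Business Administration and Accountancy"=>1}

# HAVING COUNT(*) > 1: keep only the groups passing the condition
popular = counts.select { |_program, count| count > 1 }
# => {"Computer Science"=>2}
```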

E. Union

Joins resulting datasets from multiple queries.

select * from students where programid in (1, 2)
union
select * from students;