Amazon auto scaling and Meteor

I wrote this post like a tutorial, mostly to document the process, but the real intention is to share what I did to create an auto-scaling infrastructure using Amazon EC2, and hopefully to receive some ideas, advice and feedback.

I’m assuming here that you already know how to Launch EC2 Instances.

A few weeks ago I was put in charge of building and testing an infrastructure capable of auto scaling using Amazon tools, to be used on our next Meteor project. That's what this post is about; I'll show you what I've done so far.

 

Configuring MongoDB Replica Set

The first thing I did was create two instances for MongoDB and configure a replica set.

To do that, create a new instance for the primary MongoDB: go to Instances and launch a new one.

Once it's running, connect to this new instance to set up the initial replica set configuration.

The first thing is to export the LC_ALL variable; add this line to .bashrc.

export LC_ALL=C

Now, install MongoDB following the mongo documentation.

http://docs.mongodb.org/manual/tutorial/install-mongodb-on-ubuntu/

After that, stop the MongoDB service.

sudo service mongod stop

Create the database directory (it's possible to change the location with the --dbpath parameter).

sudo mkdir -p /data/db
sudo mongod --replSet "rs0" --smallfiles &
disown
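As an aside, instead of hand-launching mongod with & and disown, the same options can live in /etc/mongod.conf so the packaged service manages the process across reboots. A sketch of the equivalent entries, assuming the YAML config format of recent MongoDB packages (the smallfiles flag moves around between versions, so it's left out here):

```shell
# Equivalent /etc/mongod.conf entries (YAML), shown as comments;
# with these in place, "sudo service mongod start" replaces the
# hand-launched, disowned mongod above.
#
#   storage:
#     dbPath: /data/db
#   replication:
#     replSetName: "rs0"
```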

Connect to the mongo shell to initiate the replica set.

mongo
rs.initiate()
conf = rs.conf()
conf.members[0].priority = 10
rs.reconfig(conf)

At this point, this instance becomes the PRIMARY one.

(screenshot: mongo shell prompt showing PRIMARY)

Now create the oplog user.

use admin
db.createUser({user: "oplogger", pwd: "YOUR-PASS", roles: [{role: "read", db: "local"}]});

Fine! Your PRIMARY replica set is ready!
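This is the user that Meteor's oplog tailing will authenticate as. The connection string it implies has the following shape — a sketch with placeholder values (the host is the private IP used later in env.sh; substitute your own host and password):

```shell
# Placeholder values — substitute your real primary host and the
# password chosen above; the resulting shape is what Meteor's
# MONGO_OPLOG_URL expects (note authSource=admin, since the user
# was created in the admin database).
OPLOG_USER="oplogger"
OPLOG_PASS="YOUR-PASS"
PRIMARY="172.31.19.238:27017"
MONGO_OPLOG_URL="mongodb://$OPLOG_USER:$OPLOG_PASS@$PRIMARY/local?authSource=admin"
echo "$MONGO_OPLOG_URL"
```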

Secondary instances

Now launch another instance just like the previous one, connect to it, and repeat the initial setup.

Install MongoDB following the mongo documentation.

http://docs.mongodb.org/manual/tutorial/install-mongodb-on-ubuntu/

Stop the MongoDB service.

sudo service mongod stop

Create the database directories.

sudo mkdir -p /data/db
sudo mongod --replSet "rs0" --smallfiles &
disown

Now, on the PRIMARY server, add this new mongod to the replica set members list.

mongo
rs.add("SECONDARY INSTANCE IP")
conf = rs.conf()
conf.members[1].priority = 5
rs.reconfig(conf)

 

Create an auto-updatable image

Now that you have your replica set, it’s time to create your auto-updatable image to use on your auto-scaling infrastructure.

Once again, launch an instance; it can be a small one (I used t2.small). When it's up, connect to it and edit the .bashrc file to export LC_ALL.

export LC_ALL=C

The next step is to install Node.js and npm.

sudo apt-get update
sudo apt-get install nodejs
sudo apt-get install npm
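One gotcha worth checking before you bake the image: on Ubuntu, the apt package historically installs the binary as nodejs, while a Meteor bundle is started with node. A quick detection sketch (the usual fix, if node is missing, is the nodejs-legacy package or a symlink):

```shell
# Detect which Node binary names exist on the PATH. On stock Ubuntu
# the apt package installs "nodejs" only, while the Meteor bundle is
# started with "node"; if "node" is missing, install nodejs-legacy
# or add a symlink pointing at nodejs.
found=""
for name in node nodejs; do
  if command -v "$name" >/dev/null 2>&1; then
    found="$found $name"
  fi
done
echo "node binaries found:$found"
```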

Ok, now let’s talk about the strategy. This environment will become an instance image, so, when needed, Amazon’s auto scaling will launch new instances from it dynamically. But what happens if the application changes after the image was created? Nothing: an image is static, so the instances that auto scaling launches will be exactly like the instance used to create the image, and your app will be outdated.

To solve that, we can configure the instance to pull the code from the repository, build the app and deploy it during startup.

If you’ve read and used A successful Git branching model, you know it’s safe to do something like that: in this model, development happens on branches other than master, and master contains only production-ready code. In other words, in master we trust.

So, let’s configure this instance to auto update itself during boot. I used the ubuntu user’s home directory for everything: hosting my sources and scripts, and building the app. It looks like this.

(screenshot: the ubuntu home directory layout)

As you can see in this image, I’ve created two scripts: env.sh, to configure all the environment variables necessary to run a Meteor app, and update-instance.sh, to check out the sources from the repo, build the app and start Node.js.

Let’s take a look at env.sh.

export PORT=3000
export ROOT_URL=http://redpass.portaltecsinapse.com.br/
export MONGO_URL=mongodb://172.31.19.238:27017/redpass
export MONGO_OPLOG_URL=mongodb://oplogger:PASSWORD@172.31.19.238:27017/local?authSource=admin
export DISABLE_WEBSOCKETS=1
export METEOR_SETTINGS=$(cat /home/ubuntu/redpass/settings.json)
ip=$(curl http://169.254.169.254/latest/meta-data/public-ipv4)
export CLUSTER_ENDPOINT_URL=http://$ip:3000

By exporting these environment variables, our Meteor app has all the information necessary to run in a cluster and use the replica set we configured before.
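Of those, CLUSTER_ENDPOINT_URL is the only value computed at boot. You can sanity-check the composition off-instance with a stand-in IP, since the 169.254.169.254 metadata lookup only resolves from inside EC2:

```shell
# 203.0.113.10 is a stand-in for what the EC2 metadata service would
# return; outside an instance, the real lookup
# (curl http://169.254.169.254/latest/meta-data/public-ipv4) won't
# answer, so we fake its result here.
ip="203.0.113.10"
CLUSTER_ENDPOINT_URL="http://$ip:3000"
echo "$CLUSTER_ENDPOINT_URL"
```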

The next and biggest one is update-instance.sh. Let’s see it.

#!/bin/bash

gyp_rebuild_inside_node_modules () {
  for npmModule in ./*; do
    cd $npmModule

    isBinaryModule="no"
    check_for_binary_modules () {
      if [ -f binding.gyp ]; then
        isBinaryModule="yes"
      fi

      if [ $isBinaryModule != "yes" ]; then
        if [ -d ./node_modules ]; then
          cd ./node_modules
          for module in ./*; do
            cd $module
            check_for_binary_modules
            cd ..
          done
          cd ../
        fi
      fi
    }

    check_for_binary_modules

    if [ $isBinaryModule == "yes" ]; then
      rm -rf node_modules
      if [ -f binding.gyp ]; then
        sudo npm install
        sudo node-gyp rebuild || :
      else
        sudo npm install
      fi
    fi
    cd ..
  done
}

rebuild_binary_npm_modules () {
  for package in ./*; do
    if [ -d $package/node_modules ]; then
      cd $package/node_modules
        gyp_rebuild_inside_node_modules
      cd ../../
    elif [ -d $package/main/node_module ]; then
      cd $package/main/node_module
        gyp_rebuild_inside_node_modules
      cd ../../../
    fi
  done
}

cd /home/ubuntu/redpass
sudo rm -fr /home/ubuntu/build
sudo rm -fr node_modules
git fetch --all
git reset --hard origin/master
git pull
meteor remove-platform android
meteor remove-platform ios
meteor remove-platform firefoxos
meteor build /home/ubuntu/build --directory

cd /home/ubuntu/build/bundle/programs/server
if [ -d ./npm ]; then
  cd npm
  rebuild_binary_npm_modules
  cd ../
fi
if [ -d ./node_modules ]; then
  cd ./node_modules
  gyp_rebuild_inside_node_modules
  cd ../
fi
if [ -f package.json ]; then
  sudo npm install
else
  sudo npm install fibers
  sudo npm install bcrypt
fi

sudo stop redpass
sudo killall node
sudo rm -fr /opt/redpass/app/*
sudo cp -fr /home/ubuntu/build/bundle/* /opt/redpass/app/
sudo chmod -R +x /opt/redpass/app/*
sudo chown -R meteoruser /opt/redpass/app
source /home/ubuntu/env.sh
node /opt/redpass/app/main.js

exit 0

I won’t dive into details here because, if for some reason you decide to use it, all you have to do is customise the last block (the lines below “sudo killall node”); everything else I copied from MUP.

Nice! Now, to run it at startup, just include it in your /etc/rc.local.

sudo -H -u ubuntu /home/ubuntu/update-instance.sh
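For context, the whole /etc/rc.local might end up looking like this — a sketch assuming the stock Ubuntu file; backgrounding the script with & is my suggestion, so a long git fetch or meteor build doesn't block the rest of the boot:

```shell
#!/bin/sh -e
#
# /etc/rc.local — executed at the end of each multiuser runlevel.
# Run the update script as the ubuntu user; the trailing & keeps a
# slow build from blocking the rest of the boot sequence.
sudo -H -u ubuntu /home/ubuntu/update-instance.sh &

exit 0
```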

Finally, in the instance list, right click on this instance to create an image.

(screenshot: the Create Image context menu)

Fill in the Image name field and click the Create Image button.

Now when new instances are created based on this image, they’ll be updated at startup.

 

Create a Launch Configuration

Go to Auto Scaling, Auto Scaling Groups. It’s at the bottom of the sidebar.

(screenshot: the Auto Scaling section of the EC2 console)

Now click the Create Auto Scaling group button and then Create launch configuration.

The first step (Choose AMI) is to choose your image: click on My AMIs and select it.

(screenshot: choosing the AMI)

On the next screen, select the size of the machines that auto scaling will launch. I kept t2.small: 1 vCPU and 2 GB of memory.

(screenshot: choosing the instance type)

Next, you’ll Configure details.

Under Advanced Details, select an IP address type. If you want to connect to an instance in a VPC, you must select an option that assigns a public IP address. If you want to connect to your instance but aren’t sure whether you have a default VPC, select Assign a public IP address to every instance.

That’s what I did: a public IP address for every instance.

Now click the Skip to review button and then the Create launch configuration button.

You’ll see a modal, Select an existing key pair or create a new key pair; either option works.

Click on Create launch configuration and wait.

 

Create an Auto Scaling Group

On Configure Auto Scaling group details you have to fill in the Group name, Group size (I set it to 1) and a Subnet (click to see your options). Then click the Next: Configure scaling policies button.

Now select Use scaling policies to adjust the capacity of this group and configure it according to your needs. For example:

(screenshots: the increase and decrease scaling policies)

Click Review, then Create Auto Scaling group, and Close.

 

Checking Your Auto Scaling

Now, connect to your Meteor instance and install nmon to monitor it.

sudo apt-get install nmon

Open another terminal, connect to your server again and stress the CPU. You can run:

dd if=/dev/zero of=/dev/null

Or install and use the stress tool:

sudo apt-get install stress
stress -c 1
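If you'd rather not leave dd or stress running until you remember to kill them, here is a bounded alternative — burn_cpu is a hypothetical helper, not part of the original setup, that spins for a fixed number of seconds and then exits on its own:

```shell
# Spin one core flat out for $1 seconds, then stop on its own,
# so there is no stray dd/stress process to kill by hand.
burn_cpu () {
  end=$(( $(date +%s) + $1 ))
  while [ "$(date +%s)" -lt "$end" ]; do :; done
}

burn_cpu 2
echo "burn finished"
```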

And monitor it with nmon.

(screenshot: nmon CPU monitor)

After a while you’ll see new instances initializing.

(screenshot: new instances initializing in the console)

 

Conclusion

This infrastructure is working pretty well so far (updates) and responding fast during our tests (performance). The first version of this app will be delivered to the real world at the end of August, and then we’ll see how good it is.

Any ideas and suggestions are very welcome.

Allan de Queiroz
London based software engineer