Rackspace vs Amazon impressions

I’ve been a customer of Amazon Web Services since it was in beta, and I’ve also worked with Rackspace now and then. Lately I’ve been working extensively with Rackspace infrastructure, so I have a better understanding of their products. Below are a few things that bother me about Rackspace:


Instances

The thing you are going to use 99% of the time with any provider. Amazon offers standard instances, high-CPU instances, high-memory instances, and huge memory-and-CPU instances. On Rackspace the choice is very limited: no high-CPU or high-memory flavors, though they plan to add them in the near future. Until then, if you have a CPU bottleneck it’s tough luck: scale up the whole instance and pay double.

And since we are talking about instance pricing: Amazon is cheaper in the long run thanks to Reserved Instances (you pay an upfront fee in exchange for a lower hourly price afterwards). Amazon has also cut its prices several times. Rackspace? Never.
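To illustrate with made-up numbers: at $0.10/hour on demand, a year costs about $876 (0.10 x 8760 hours), while a hypothetical reserved deal of $250 upfront plus $0.04/hour comes to about $600 for the same year (250 + 0.04 x 8760), roughly a 30% saving on the same instance.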

Block storage

Called EBS on Amazon and CBS on Rackspace, block storage is the preferred way of adding extra space to running instances without upgrading them. Beware that on Rackspace snapshots take a horribly long time; I had to contact RS support several times because of this.
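For comparison, on Amazon a snapshot is a single quick call with the EC2 API tools (volume ID and description are placeholders):

ec2-create-snapshot vol-xxxxxxxx -d "weekly backup"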

MySQL instances

Called RDS on Amazon and Cloud Databases on Rackspace, these are basically optimized instances running MySQL, but you get no SSH access to them. Here Rackspace has the weakest offer:
no scheduled backups (RS recommends mysqldump… lol, try running mysqldump on 10GB+ of data)
no replication (really? I couldn’t replicate my database)
no hot spare (like Amazon’s Multi-AZ)

The good thing about Cloud Databases is that they are fast. Really fast. RS also promised to address all of the above in the near future (yeah).
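If you do fall back to the mysqldump route RS recommends, the "scheduled backup" ends up being a cron job along these lines (host, credentials and paths are hypothetical), which stops being practical long before 10GB:

# nightly logical backup; --single-transaction keeps InnoDB reads consistent
0 3 * * * mysqldump -h clouddb-host -u backup -ppassword --single-transaction mydb | gzip > /backups/mydb-$(date +\%F).sql.gz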

Load Balancing

Rackspace does not support load balancing inside the private network. Come on Rackspace, do your homework: AWS has supported this since ancient history.


There are probably many more things waiting to be discovered, and likely not pleasant ones. So far I am disappointed with Rackspace and what it has to offer, and I would recommend Amazon to any customer instead.

Installing memcached with repcached patch for HA memcache cluster


Repcached is an interesting patch for memcached that allows replication between 2 memcached nodes (servers). The purpose of this article is to set up 2 memcached servers that replicate each other.

Note: This article is specifically written for Ubuntu 12.04 and memcached version 1.4.13. It may or may not work for other versions.


Setup: 2x 512MB RackSpace instances, node01 and node02.

You will need the repcached patch for memcached v1.4.13 (repcached-2.3.1-1.4.13.patch).

Prepare to build the package:

apt-get build-dep memcached
apt-get source memcached
cd memcached-1.4.13
patch -p1 -i repcached-2.3.1-1.4.13.patch

Now edit the file debian/rules, look for the config.status target, and add --enable-replication to the configure call like this:

config.status: configure
        CFLAGS="$(CFLAGS)" ./configure --host=$(DEB_HOST_GNU_TYPE) --enable-replication

Now build the package:

dpkg-buildpackage -us -uc -nc
cd ..

You should now see a package named memcached_1.4.13-0ubuntu2_amd64.deb. Copy this file to both of your memcached servers and install it with:

dpkg -i memcached_1.4.13-0ubuntu2_amd64.deb
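To confirm you actually installed the patched build, check that the help output now lists repcached's replication options (-x and -X):

memcached -h | grep -i repl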


You are almost done now. Kill all running memcached processes on both nodes:

killall -9 memcached
ps aux | grep memcached

On node01 do the following things:

cp /etc/memcached.conf /etc/memcached_server1.conf

Edit /etc/memcached_server1.conf and replace the default listen line:

-l 127.0.0.1

with node01's own IP address, then add the replication peer (repcached's -x option) at the end:

-x <node02 IP>
Start memcached:

service memcached start

On node02 do something similar:

cp /etc/memcached.conf /etc/memcached_server2.conf

Edit /etc/memcached_server2.conf and replace the default listen line:

-l 127.0.0.1

with node02's own IP address, then add the replication peer at the end:

-x <node01 IP>
Start memcached:

service memcached start
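Before testing, you can verify on each node that memcached is listening on both the cache port (11211) and repcached's replication port (11212 by default):

netstat -ntlp | grep memcached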


From node01, connect to memcached and store a 4-byte test value:

telnet <node01 IP> 11211
Escape character is '^]'.
get hello
END
set hello 0 0 4
test
STORED
get hello
VALUE hello 0 4
test
END

From node02, the key should already be there, replicated by repcached:

telnet <node02 IP> 11211
Escape character is '^]'.
get hello
VALUE hello 0 4
test
END

How to set up a Galera 3-node cluster on Ubuntu 12.04

Galera is a multi-master replication solution for MySQL and an interesting alternative to the standard master-master MySQL replication we are all so used to. One main advantage of Galera is its synchronous replication, which reduces the risk of data inconsistency between masters.

Setup on RackSpace Cloud

3x 512MB RAM instances, with 20GB storage space
1x Load Balancer for MySQL, RoundRobin algorithm, Health check enabled
1x 512MB RAM instance for testing
OS: Ubuntu 12.04 LTS 64bit


The goal: quickly set up a Galera cluster and run some benchmarks using sysbench.

Note: For the sake of simplicity I will refer to the Galera instances as node01, node02 and node03. The test instance will be referred to as test01.

Common settings on all nodes

On every node execute:

  1. Run apt-get update and apt-get upgrade to bring the instances up to date.
  2. Install required packages
    apt-get install libaio1 libssl0.9.8 mysql-client libdbd-mysql-perl libdbi-perl
  3. Download and install the Galera wsrep provider
    dpkg -i galera-23.2.4-amd64.deb
  4. Download and install the MySQL server build with the wsrep patch
    dpkg -i mysql-server-wsrep-5.5.28-23.7-amd64.deb
  5. I had some issues until I created /var/log/mysql:
    mkdir -pv /var/log/mysql
    chown mysql:mysql -R /var/log/mysql
  6. Secure the mysql installation and assign a good password to the root user (mysql_secure_installation is the quickest way):
    service mysql restart
    mysql_secure_installation
  7. Create a user for the galera nodes to use for connection/replication
    mysql -p
    mysql> grant all privileges on *.* to galera@'%' identified by 'password';
    Query OK, 0 rows affected (0.00 sec)
    mysql> flush privileges;
    Query OK, 0 rows affected (0.00 sec)
    mysql> set global max_connect_errors = 10000;
    Query OK, 0 rows affected (0.01 sec)
  8. Edit /etc/hosts and make sure you add all the nodes and their corresponding IPs

Galera setup for each node

Edit /etc/mysql/conf.d/wsrep.cnf on every node and change the values for the following variables. The per-node blocks below are a minimal sketch: the cluster name is arbitrary, and the galera user created in step 7 is assumed for state snapshot transfers (wsrep_sst_auth).

Configuration for node01:
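# sketch -- cluster name is your choice, galera user from step 7 assumed
wsrep_provider=/usr/lib/galera/libgalera_smm.so
wsrep_cluster_name="mycluster"
wsrep_cluster_address="gcomm://"    # empty: node01 bootstraps the cluster
wsrep_sst_auth=galera:password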


Configuration for node02:
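# same settings as node01, except node02 joins the cluster via node01
wsrep_cluster_address="gcomm://node01:4567"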


Configuration for node03:
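# same settings again; node03 joins via node02
wsrep_cluster_address="gcomm://node02:4567"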


Testing the setup

Now restart mysql on all the nodes and check that the cluster is working:

service mysql restart
mysql -p
mysql> show status like 'wsrep%';
| Variable_name | Value |
| wsrep_cluster_size | 3 |
| wsrep_ready | ON |

One more thing before you are done:
on node01 set wsrep_cluster_address="gcomm://node03:4567" (so node01 stops bootstrapping and joins the ring) and restart the mysql server.

Benchmarks were performed from the test01 instance using the sysbench 0.5 OLTP read-only complex test:

[Figure: sysbench OLTP (read-only), Galera cluster transactions vs. threads]

[Figure: sysbench OLTP (read-only), Galera cluster average response time per thread count (avg / min / approx. 95th percentile)]


Benchmark Galera cluster vs MySQL master-master on RackSpace


Before starting I would like to point out that I compared 2 instances (master-master) against 3 instances (Galera cluster), so the test is not exactly fair or accurate. It’s more of a “what if I switch from master-master replication to a 3-node Galera cluster” comparison.

MySQL Master-Master replication:

2x 512 MB instances with 20GB of storage, Ubuntu 12.04 64bit; mysql-server 5.5 with no optimization changes to my.cnf except those required for master-master replication.
1x LoadBalancer, RoundRobin algorithm

Galera 3 nodes cluster:

3x 512 MB instances with 20GB of storage, Ubuntu 12.04 64bit; the mysql-server 5.5 build from Galera with no changes to my.cnf, only the required per-node changes in wsrep.cnf.
1x LoadBalancer, RoundRobin algorithm

Test instance:

1x 512MB instance with 20GB of storage, Ubuntu 12.04 64bit running sysbench

sysbench --test=oltp --mysql-host=loadbalancer_ip --mysql-user=root --mysql-password=password --oltp-table-size=1000000 prepare

The tests were performed on a database of about 256MB size, InnoDB table(s). No optimization changes were made to default my.cnf files, except the required to setup replication.
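Only the prepare step is shown above; each row of the table below corresponds to a run invocation along these lines (sysbench 0.4 flags, here for the 16-thread read-only case):

sysbench --test=oltp --mysql-host=loadbalancer_ip --mysql-user=root --mysql-password=password --oltp-table-size=1000000 --oltp-read-only=on --num-threads=16 --max-time=60 --max-requests=0 run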

sysbench OLTP transactions per second

Test                     Master-Master   Single node   Galera cluster
1 thread, 3m             10.97           17.11         12
16 threads, 1m, rw       154             140           0
16 threads, 1m, r only   217             158.7         206
32 threads, 1m, r only   325             160.79        375


As you can see from the table, I had issues running sysbench against the Galera cluster in rw mode with 16 threads. From what I have found on the Internet it’s an issue with sysbench 0.4.12, so I will attempt to rerun the tests with a newer version.

Installing Scalr 3.5 Open Source on Ubuntu 12.04

This is an update to an older post of mine, one of my first articles about cloud computing. Much has changed since 2008, when I wrote the article “How to install Scalr on Ubuntu 8.10 EC2 Instance“.

For example, Ubuntu has evolved to 12.04 LTS (I am using the 64bit LTS for this howto) and Scalr is now at version 3.5. One thing didn’t change: it’s still a royal PITA to get Scalr open source working. Hopefully this howto will help you install Scalr on your server. It doesn’t cover operating Scalr and other topics, which I will address in future posts if there is enough interest.

After you have installed Ubuntu 12.04 64bit server edition on your server or virtual machine the way you like it, it’s time to start the update process:

apt-get update && apt-get upgrade

Now you are ready to run tasksel and select the following roles for your server: OpenSSH, DNS server, LAMP server

You will need to install some dev packages before doing anything else:

apt-get install libcurl4-gnutls-dev make librrd-dev

Now it’s time for PHP5 related extensions:

apt-get install php5-curl php-gettext php-net-socket php5-mcrypt php-xml-serializer libssh2-php php-soap php5-snmp php5-rrd
pecl install pecl_http
echo "extension=http.so" > /etc/php5/conf.d/pecl_http.ini
pecl install rrd
echo "extension=rrd.so" > /etc/php5/conf.d/rrd.ini

Time to get Scalr code:

cd /tmp
tar zxvf scalr-3.5.r7704.tar.gz
cd scalr-3.5.r7704
cp -r app /var/www/
chown -R www-data:www-data /var/www/app

Create a new database and import the SQL from sql/scalr.sql:

mysql -p
mysql> CREATE DATABASE scalr CHARACTER SET latin1 COLLATE latin1_swedish_ci;
mysql> grant all privileges on scalr.* to scalr@localhost identified by 'password';
mysql> flush privileges;
mysql> quit
mysql -p scalr <sql/scalr.sql

While doing that import I got a nice error:
ERROR 1054 (42S22) at line 2222: Unknown column ‘architecture’ in ‘field list’
To fix it:
1) Drop and recreate the database
2) Search sql/scalr.sql for “CREATE TABLE IF NOT EXISTS `role_images`” and add the following after the platform column:

`architecture` varchar(25) DEFAULT NULL,
`os_family` varchar(25) DEFAULT NULL,
`os_name` varchar(25) DEFAULT NULL,
`os_version` varchar(25) DEFAULT NULL,
`agent_version` varchar(25) DEFAULT NULL,


Configuration of Scalr is quite simple:

cd /var/www/app/etc
cp config.ini-sample config.ini
edit config.ini

Cron jobs required by Scalr? Just type crontab -e and add the following lines:

*/2 * * * * /usr/bin/php -q /var/www/app/cron-ng/cron.php --Poller
* * * * * /usr/bin/php -q /var/www/app/cron/cron.php --Scheduler2
*/10 * * * * /usr/bin/php -q /var/www/app/cron/cron.php --MySQLMaintenance
* * * * * /usr/bin/php -q /var/www/app/cron/cron.php --DNSManagerPoll
17 5 * * * /usr/bin/php -q /var/www/app/cron/cron.php --RotateLogs
*/2 * * * * /usr/bin/php -q /var/www/app/cron/cron.php --EBSManager
*/20 * * * * /usr/bin/php -q /var/www/app/cron/cron.php --RolesQueue
*/5 * * * * /usr/bin/php -q /var/www/app/cron-ng/cron.php --DbMsrMaintenance
*/2 * * * * /usr/bin/php -q /var/www/app/cron-ng/cron.php --Scaling
*/5 * * * * /usr/bin/php -q /var/www/app/cron/cron.php --DBQueueEvent
*/2 * * * * /usr/bin/php -q /var/www/app/cron/cron.php --SzrMessaging
*/4 * * * * /usr/bin/php -q /var/www/app/cron/cron.php --RDSMaintenance
*/2 * * * * /usr/bin/php -q /var/www/app/cron/cron.php --BundleTasksManager
* * * * * /usr/bin/php -q /var/www/app/cron-ng/cron.php --ScalarizrMessaging
* * * * * /usr/bin/php -q /var/www/app/cron-ng/cron.php --MessagingQueue
*/2 * * * * /usr/bin/php -q /var/www/app/cron-ng/cron.php --DeployManager
*/2 * * * * /usr/bin/php -q /var/www/app/cron/cron.php --UsageStatsPoller
* * * * * /usr/bin/php -q /var/www/app/cron-ng/cron.php --SNMPStatsPoller

Time to add a Virtual Host:

cat <<EOF> /etc/apache2/sites-available/scalr
<VirtualHost *:80>
DocumentRoot "/var/www/app/www"

<Directory "/var/www/app/www">
Options Indexes FollowSymLinks
AllowOverride All
Order allow,deny
Allow from all
</Directory>
</VirtualHost>
EOF

Enable the required Apache modules and the site, then restart everything:

a2ensite scalr
a2enmod rewrite
service apache2 restart

DNS managed by bind9:

chmod g+w /etc/bind/named.conf
echo 'include "/var/named/etc/namedb/client_zones/zones.include";' >> /etc/bind/named.conf
mkdir -p /var/named/etc/namedb/client_zones
chown root.bind /var/named/etc/namedb/client_zones
chmod 2775 /var/named/etc/namedb/client_zones
echo ' ' > /var/named/etc/namedb/client_zones/zones.include
chown root.bind /var/named/etc/namedb/client_zones/zones.include
chmod g+w /var/named/etc/namedb/client_zones/zones.include
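Before restarting bind (below) it doesn't hurt to sanity-check the configuration:

named-checkconf /etc/bind/named.conf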

To get rid of nasty AppArmor warnings and errors edit /etc/apparmor.d/usr.sbin.named and add:

/var/named/etc/namedb/client_zones/zones.include rw,

And finish it by restarting AppArmor and bind9:

service apparmor restart
service bind9 restart

Open your browser, go to your Scalr virtual host, and log in. The default username/password is admin/admin.

If you have issues or you need more info please feel free to comment 🙂

Apache2 worker vs prefork for ISPConfig benchmark

I’ve been running the latest version of ISPConfig (3.0.4) on an Amazon t1.micro instance for some time, hosting several small sites, mostly WordPress. I’m quite happy with the performance of the instance. The OS is Ubuntu 10.04 LTS. Until recently I used the default MPM, which is prefork, but I decided to also test worker. If you are wondering, I use mod_fcgid for all the sites. That being said, I ran several tests with ab (Apache Benchmark) to see which MPM can serve the most requests per second.

While I do not claim this is the best setup, I think worker is better suited for me. Some people have reported problems with the worker MPM; so far so good here, but I will update this post if any issues appear.
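For reference, the numbers below are consistent with an ab invocation along these lines (URL hypothetical): 5000 keep-alive requests at a concurrency of 32:

ab -k -c 32 -n 5000 http://www.example.com/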

Test results:

prefork:

Concurrency Level: 32
Time taken for tests: 7.834 seconds
Complete requests: 5000
Failed requests: 0
Write errors: 0
Keep-Alive requests: 4972
Total transferred: 84831033 bytes
HTML transferred: 83206915 bytes
Requests per second: 638.27 [#/sec] (mean)
Time per request: 50.136 [ms] (mean)
Time per request: 1.567 [ms] (mean, across all concurrent requests)
Transfer rate: 10575.21 [Kbytes/sec] received

worker:

Concurrency Level: 32
Time taken for tests: 7.096 seconds
Complete requests: 5000
Failed requests: 0
Write errors: 0
Keep-Alive requests: 4968
Total transferred: 84877824 bytes
HTML transferred: 83247322 bytes
Requests per second: 704.63 [#/sec] (mean)
Time per request: 45.414 [ms] (mean)
Time per request: 1.419 [ms] (mean, across all concurrent requests)
Transfer rate: 11681.17 [Kbytes/sec] received

Creating consistent backups for EBS with EXT4 and quota

What’s this about?
Data security and backups are very important when you work with servers, especially on cloud infrastructure. I am using AWS (Amazon Web Services) as my preferred IaaS, so the following how-to is tailored for Amazon EC2 instances using EBS as storage for the web sites’ files. On my instance I have Ubuntu 10.04 LTS installed, and on top of it I run ISPConfig 3.0.4 (the latest version at the moment I write this article). Some of the programs required for this setup were already installed, but it should be pretty obvious if you are missing anything. If you need help you can either leave a comment or contact me via email.

The following setup will give you an EBS volume formatted with EXT4, quota enabled on it (for ISPConfig), and weekly backups of the EBS. In case of instance failure you should be able to launch a new instance and attach the EBS without losing any web site files. In case of EBS failure you can recreate the volume from the most recent snapshot.

Create an EBS volume in the same zone as your instance and attach it as /dev/sdf. This can easily be done from the AWS Management Console.

Install xfsprogs (ec2-consistent-snapshot uses xfs_freeze, which can freeze EXT4 as well)

sudo apt-get install xfsprogs

Create EXT4 filesystem on /dev/sdf

sudo mkfs.ext4 /dev/sdf

Now mount it temporarily

sudo mkdir /mnt/ebs
sudo mount /dev/sdf /mnt/ebs

Stop the apache2 web server and copy the files to /mnt/ebs

sudo service apache2 stop
cd /mnt/ebs
sudo cp -rp /var/www/* .

Prepare quota

sudo touch quota.user
sudo chmod 600 quota.*

Add the entry to /etc/fstab

/dev/sdf /var/www ext4 noatime,nobootwait,usrjquota=quota.user,jqfmt=vfsv0 0 0

Unmount the EBS and remount it to /var/www

sudo umount /dev/sdf
sudo mount /dev/sdf /var/www -o noatime,usrjquota=quota.user,jqfmt=vfsv0

Enable quota

sudo quotacheck -avugm
sudo quotaon -avug
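You can verify that quota is really active on the new mount with repquota:

sudo repquota /var/www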

Start the apache2 web server and check that the web sites are working properly

sudo service apache2 start

Install ec2-consistent-snapshot script for weekly backups of EBS

sudo add-apt-repository ppa:alestic
sudo apt-get update
sudo apt-get install -y ec2-consistent-snapshot

Prepare the first snapshot (I assume the cron job will run as the root user, hence I create the .awssecret file in the /root directory):

sudo touch /root/.awssecret
sudo chmod 600 /root/.awssecret

Edit .awssecret and add the following two lines, in this order, replacing ACCESS_KEY_ID and SECRET_ACCESS_KEY with your own (both can be found under Account->Security Credentials):

ACCESS_KEY_ID
SECRET_ACCESS_KEY
Test the snapshot creation with debug mode activated, replace VOLUME_ID with the right volume ID:

sudo ec2-consistent-snapshot --debug --description "snapshot $(date +%Y-%m-%d-%H:%M:%S)" --freeze-filesystem /var/www vol-VOLUME_ID

If everything went well you should be able to see your new snapshot in the AWS Management Console.

Finally add this to your root crontab (by running sudo crontab -e):

@weekly /usr/bin/ec2-consistent-snapshot --debug --description "snapshot $(date +'%Y-%m-%d %H:%M:%S')" --freeze-filesystem /var/www vol-VOLUME_ID>>/var/log/backup.log 2>&1

Make sure you put the correct VOLUME_ID!

This should be all: you now have all your web sites on EBS, quota enabled and weekly backups running. I think I pretty much covered everything needed to perform this setup, but if there are any issues feel free to leave a comment. I also love getting feedback, so if you found this article useful leave a comment too 🙂

Amazon RDS SUPER privileges

#1419 – You do not have the SUPER privilege and binary logging is enabled (you *might* want to use the less safe log_bin_trust_function_creators variable)

This error sometimes occurs on RDS instances when you try to use stored procedures, functions or triggers. You will soon find out that granting the SUPER privilege to a user won’t work on RDS, so the only way to make things work is to set log_bin_trust_function_creators to 1.
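You typically hit it when creating a trigger or stored function while binary logging is enabled; a hypothetical example:

-- any trigger/function creation trips the check
CREATE TRIGGER orders_audit BEFORE INSERT ON orders
FOR EACH ROW SET NEW.created_at = NOW();
-- fails with error #1419 unless log_bin_trust_function_creators is set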

The RDS console allows you to create a new parameter group and modify its parameters. Log in to the RDS console, go to “DB Parameters Groups” and click “Create DB Parameter Group”. Set the following:

  • DB Parameter Group Family: mysql5.1
  • DB Parameter Group Name: mygroup
  • Description: mygroup

Confirm by clicking “Yes, create” button.

Here comes the ugly part: for the moment you cannot edit the parameters from the console (I hope they are going to change that). You will need to log in to your instance over SSH and download the RDS command line tools (RDSCli):

To do so, right click the “Download” button and copy the link location. In the SSH window use wget to download the archive and unzip it:

wget ""
unzip RDSCli.zip

If you don’t have unzip you can quickly get it with “apt-get install unzip” (Ubuntu) or “yum install unzip” (CentOS). Of course you will need root privileges.

After successfully unpacking RDSCli, cd to that directory and set a few variables. The following is an example on Ubuntu 10.04:

cd RDSCli-1.4.006
export AWS_RDS_HOME="/home/ubuntu/RDSCli-1.4.006"
export JAVA_HOME="/usr/lib/jvm/java-6-sun"
cd bin
./rds --help

If rds --help outputs no errors then you have set everything correctly. Congrats. One more command:

./rds-modify-db-parameter-group mygroup --parameters="name=log_bin_trust_function_creators, value=on, method=immediate" --I="YOUR_AWS_ACCESS_KEY_ID" --S="YOUR_AWS_SECRET_ACCESS_KEY"
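The same toolkit should be able to confirm the change took (same keys as above):

./rds-describe-db-parameters mygroup --I="YOUR_AWS_ACCESS_KEY_ID" --S="YOUR_AWS_SECRET_ACCESS_KEY" | grep log_bin_trust_function_creators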

The AWS keys can be obtained from your AWS account under Security Credentials->Access Credentials->Access Keys.

Go to AWS RDS console, “DB Instances”, select your instance and right click “Modify”. Set “DB Parameter group” to “mygroup” and check “Apply Immediately”. Confirm with “Yes, modify”.

You are done 🙂

Apple announces iCloud

Warning: offensive language! If you are easily offended stop reading here.

What’s iCloud? It’s Apple’s idea of cloud computing:

iCloud stores your music, photos, apps, calendars, documents, and more. And wirelessly pushes them to all your devices — automatically. It’s the easiest way to manage your content. Because now you don’t have to. 

To be honest it sounds more like Dropbox to me than a serious cloud service like Amazon S3. Nothing innovative, just a lot of marketing as usual: take an existing idea, put an “i” in front of it, and let Steve Jobs present it as the new cool thing. You will get a ton of hype. I know “cloud” is a cool and overused word nowadays, but really, don’t keep using it for every little s**t.

Looking forward to iF**k, Apple’s next idea that will let people interact in a more personal way and share the joy with millions of viewers via iPhone/iPad/iDevice.

MySQL benchmark: RDS vs EC2 performance

The setup: one m1.small EC2 instance vs one db.m1.small RDS instance, with tests run from the m1.small instance. The goal is to determine how a site will perform when moving the database from localhost to a remote instance.

I used sysbench for the MySQL benchmarks. On a Linux server running Ubuntu 10.04 you can simply install it with the following command (it’s obvious, but just in case):

sudo apt-get install sysbench

The first tests compared the m1.small EC2 instance running mysql-server 5.1.41-3ubuntu12.8 against a db.m1.small RDS instance running MySQL server 5.1.50. The test database was set to 10 000 records, number of threads = 1, test = oltp.

sysbench --test=oltp --mysql-user=root --mysql-password=password --max-time=180 --max-requests=0 prepare
sysbench --test=oltp --mysql-user=root --mysql-password=password --max-time=180 --max-requests=0 run
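For the runs against RDS, the same commands simply point --mysql-host at the RDS endpoint (hostname hypothetical):

sysbench --test=oltp --mysql-host=mydb.xxxxxxxx.us-east-1.rds.amazonaws.com --mysql-user=root --mysql-password=password --max-time=180 --max-requests=0 run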

The results

m1.small EC2 instance:
OLTP test statistics:
queries performed:
read: 263354
write: 94055
other: 37622
total: 395031
transactions: 18811 (104.50 per sec.)
deadlocks: 0 (0.00 per sec.)
read/write requests: 357409 (1985.56 per sec.)
other operations: 37622 (209.01 per sec.)
Test execution summary:
total time: 180.0044s
total number of events: 18811
total time taken by event execution: 179.7827
per-request statistics:
min: 4.04ms
avg: 9.56ms
max: 616.04ms
approx. 95 percentile: 38.42ms
db.m1.small RDS instance:

OLTP test statistics:
queries performed:
read: 188230
write: 67225
other: 26890
total: 282345
transactions: 13445 (74.67 per sec.)
deadlocks: 0 (0.00 per sec.)
read/write requests: 255455 (1418.74 per sec.)
other operations: 26890 (149.34 per sec.)
Test execution summary:
total time: 180.0573s
total number of events: 13445
total time taken by event execution: 179.9174
per-request statistics:
min: 9.08ms
avg: 13.38ms
max: 904.58ms
approx. 95 percentile: 20.99ms

As you can see the EC2 can perform 40% more transactions than the RDS instance. Nothing unexpected so far.

Time to move on and increase the number of threads to 10

m1.small EC2 instance:
OLTP test statistics:
queries performed:
read: 264866
write: 94545
other: 37818
total: 397229
transactions: 18899 (104.97 per sec.)
deadlocks: 20 (0.11 per sec.)
read/write requests: 359411 (1996.22 per sec.)
other operations: 37818 (210.05 per sec.)

Test execution summary:
total time: 180.0462s
total number of events: 18899
total time taken by event execution: 1799.9289
per-request statistics:
min: 4.08ms
avg: 95.24ms
max: 2620.70ms
approx. 95 percentile: 445.91ms

db.m1.small RDS instance:

OLTP test statistics:
queries performed:
read: 343812
write: 122772
other: 49109
total: 515693
transactions: 24551 (136.18 per sec.)
deadlocks: 7 (0.04 per sec.)
read/write requests: 466584 (2588.13 per sec.)
other operations: 49109 (272.41 per sec.)

Test execution summary:
total time: 180.2788s
total number of events: 24551
total time taken by event execution: 1801.8298
per-request statistics:
min: 13.41ms
avg: 73.39ms
max: 1126.02ms
approx. 95 percentile: 143.83ms

In this test the small RDS instance is faster than the EC2 instance: 136 vs 105 transactions per second. I also benchmarked a large RDS instance (the next size up from db.m1.small) and it got 185 transactions per second. Quite good, but the price is 4x higher.

The next test was performed against 10 million records with 16 threads. This time I only benchmarked the small and the large RDS instances. The large instance managed 228 transactions per second while the small one got a decent score of 127. One thing I noticed during this test is that the small instance started to use its swap, while the large one did not; this is probably because a 10M-record database is approx. 2.5GB and the small RDS only has 1.7GB of RAM.

So if you are planning to grow and want an easy way to do it, moving your database to its own RDS instance is one of the first things you should consider. One immediate effect is that CPU usage on the EC2 instance drops considerably, leaving more power for the web server. You can easily increase the size and capacity of the RDS instance with just a few clicks, and backups are done automatically, which is great considering how many times I have had to recover databases.