Installing MongoDB PECL extension

The MongoDB PECL extension has not been installed or enabled

If you have installed MongoDB and you get the above error (or something similar to it), you will need to install the PHP extension. It’s quite easy and shouldn’t take more than a couple of minutes. All the commands below were executed as root; if you prefer the sudo mechanism, just prefix each command with sudo.

Install the required packages

apt-get install php-pear php5-dev make

If everything went ok, simply install the extension by executing this command:

pecl install mongo

Activate the MongoDB extension

I have Ubuntu 12.04 server edition on this machine, so I simply added a new ini file containing a single line that loads the extension:

echo "extension=mongo.so" > /etc/php5/apache2/conf.d/mongo.ini

Now restart the web server (in my case Apache 2.2) and enjoy:

service apache2 restart
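To confirm the extension is actually visible to PHP, a quick check from the command line (note the CLI and Apache SAPIs can read different conf.d directories, so also verify with a phpinfo() page served through Apache):

```shell
# Lists loaded PHP modules and filters for the mongo extension.
php -m | grep -i mongo
```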

mod_fcgid: HTTP request length exceeds MaxRequestLen

When trying to upload a file you get “Error 500, Internal server error”. In the error log file you get something like:

[Tue Aug 21 20:40:39 2012] [warn] [client x.x.x.x] mod_fcgid: HTTP request length 132532 (so far) exceeds MaxRequestLen (131072), referer:

This problem is present in ISPConfig 3 running on Ubuntu 12.04, when running Apache2 with mod_fcgid.

Edit “/etc/apache2/mods-available/fcgid.conf” and add:

FcgidMaxRequestLen  1073741824
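Note that FcgidMaxRequestLen is measured in bytes (the default of 131072 is exactly the 128 KB limit shown in the log line above), and Apache has to be restarted before the new limit takes effect. A sketch, assuming the stock Ubuntu paths:

```shell
# Append the new limit (1073741824 bytes = 1 GiB) and restart Apache.
echo 'FcgidMaxRequestLen 1073741824' | sudo tee -a /etc/apache2/mods-available/fcgid.conf
sudo service apache2 restart
```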

Apache2 worker vs prefork for ISPConfig benchmark

I’ve been running the latest version of ISPConfig (3.0.4) on an Amazon EC2 t1.micro instance for some time, hosting several small sites, mostly WordPress. I’m quite happy with the performance of the instance. The OS is Ubuntu 10.04 LTS. Until recently I used the default MPM, which is prefork, but I decided to also test out worker. In case you are wondering, I use mod_fcgid for all the sites. That said, I performed several tests with ab (Apache Benchmark) to see which MPM can serve the most requests per second.

While I do not claim this is the best setup, I think worker is better suited for me. Some people have reported problems with the worker MPM. So far so good, but I will update this post if there are any issues.

Test results:

Concurrency Level: 32
Time taken for tests: 7.834 seconds
Complete requests: 5000
Failed requests: 0
Write errors: 0
Keep-Alive requests: 4972
Total transferred: 84831033 bytes
HTML transferred: 83206915 bytes
Requests per second: 638.27 [#/sec] (mean)
Time per request: 50.136 [ms] (mean)
Time per request: 1.567 [ms] (mean, across all concurrent requests)
Transfer rate: 10575.21 [Kbytes/sec] received

Concurrency Level: 32
Time taken for tests: 7.096 seconds
Complete requests: 5000
Failed requests: 0
Write errors: 0
Keep-Alive requests: 4968
Total transferred: 84877824 bytes
HTML transferred: 83247322 bytes
Requests per second: 704.63 [#/sec] (mean)
Time per request: 45.414 [ms] (mean)
Time per request: 1.419 [ms] (mean, across all concurrent requests)
Transfer rate: 11681.17 [Kbytes/sec] received
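For reference, the figures above (5000 requests, concurrency 32, ~4970 keep-alive requests) are consistent with an invocation like ab -k -n 5000 -c 32 against the site URL; the exact command and URL are an assumption. A quick awk one-liner puts the gap between the two runs in perspective:

```shell
# The runs above were presumably produced by something like (URL assumed):
#   ab -k -n 5000 -c 32 http://example.com/
# Throughput gain of the faster run (704.63 req/s) over the slower (638.27 req/s):
awk 'BEGIN { printf "%.1f%%\n", (704.63 / 638.27 - 1) * 100 }'
# prints 10.4%
```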

Creating consistent backups for EBS with EXT4 and quota

What’s this about?
Data security and backups are very important when you work with servers, especially on cloud infrastructure. I am using AWS (Amazon Web Services) as my preferred IaaS, so the following how-to is tailored for Amazon EC2 instances using EBS as storage for the websites’ files. On my instance I have Ubuntu 10.04 LTS installed, and on top of it I run ISPConfig 3.0.4 (the latest version at the time of writing). Some of the programs required for this setup were already installed, but it should be pretty obvious if you are missing anything. If you need help you can either leave a comment or contact me via email.

The following setup will allow you to create an EBS volume using EXT4 as the file system, with quota enabled on it (for ISPConfig), and weekly backups of the EBS. In case of instance failure you should be able to launch a new instance and attach the EBS without losing any website files. In case of EBS failure you can recreate the volume from the most recent snapshot.

Create an EBS in the same zone as your instance and attach it to your instance as /dev/sdf. This can be easily done from AWS Management Console.

Install xfsprogs. Even though the volume will use EXT4, the ec2-consistent-snapshot script installed later relies on xfs_freeze for its --freeze-filesystem option, and xfs_freeze works on EXT4 too.

sudo apt-get install xfsprogs

Create EXT4 filesystem on /dev/sdf

sudo mkfs.ext4 /dev/sdf

Now mount it temporarily

sudo mkdir /mnt/ebs
sudo mount /dev/sdf /mnt/ebs

Stop the apache2 web server and copy the files to /mnt/ebs

sudo service apache2 stop
cd /mnt/ebs
sudo cp -rp /var/www/* .

Prepare quota

sudo touch quota.user
sudo chmod 600 quota.user

Add the entry to /etc/fstab

/dev/sdf /var/www ext4 noatime,nobootwait,usrjquota=quota.user,jqfmt=vfsv0 0 0
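In case the six-column fstab format is unfamiliar, the entry breaks down like this (sketched with awk; the columns are device, mount point, filesystem type, mount options, dump flag, and fsck pass):

```shell
# Split the fstab entry into its columns to make the format explicit.
echo '/dev/sdf /var/www ext4 noatime,nobootwait,usrjquota=quota.user,jqfmt=vfsv0 0 0' |
  awk '{ printf "device=%s\nmountpoint=%s\ntype=%s\noptions=%s\n", $1, $2, $3, $4 }'
# prints:
#   device=/dev/sdf
#   mountpoint=/var/www
#   type=ext4
#   options=noatime,nobootwait,usrjquota=quota.user,jqfmt=vfsv0
```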

Unmount the EBS and remount it to /var/www

sudo umount /dev/sdf
sudo mount /dev/sdf /var/www -o noatime,usrjquota=quota.user,jqfmt=vfsv0

Enable quota

sudo quotacheck -avugm
sudo quotaon -avug
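With quota on, you can verify it is actually being enforced; a quick check (repquota ships with the same quota package ISPConfig relies on):

```shell
# Summarize per-user disk usage and limits on the new mount.
sudo repquota -u /var/www
```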

Start the apache2 web server and check that the web sites are working properly

sudo service apache2 start

Install ec2-consistent-snapshot script for weekly backups of EBS

sudo add-apt-repository ppa:alestic
sudo apt-get update
sudo apt-get install -y ec2-consistent-snapshot

Prepare the first snapshot (I assume the cron job will run as root, hence I create the .awssecret file in the /root directory)

sudo touch /root/.awssecret
sudo chmod 600 /root/.awssecret

Edit /root/.awssecret and add the following two lines, in this order, replacing ACCESS_KEY_ID and SECRET_ACCESS_KEY with your own; both can be found under Account->Security Credentials:

ACCESS_KEY_ID
SECRET_ACCESS_KEY

Test the snapshot creation with debug mode activated, replace VOLUME_ID with the right volume ID:

sudo ec2-consistent-snapshot --debug --description "snapshot $(date +%Y-%m-%d-%H:%M:%S)" --freeze-filesystem /var/www vol-VOLUME_ID

If everything went well you should be able to see your new snapshot in the AWS Management Console.

Finally add this to the root crontab (by running sudo crontab -e). Note the percent signs in the date format are escaped with backslashes, because % is a special character in crontab entries:

@weekly /usr/bin/ec2-consistent-snapshot --debug --description "snapshot $(date +'\%Y-\%m-\%d \%H:\%M:\%S')" --freeze-filesystem /var/www vol-VOLUME_ID >> /var/log/backup.log 2>&1

Make sure you put the correct VOLUME_ID!

That should be all: your websites now live on EBS, quota is enabled, and weekly backups are in place. I think I covered pretty much everything you need to perform this setup, but if there are any issues feel free to leave a comment. I also love getting feedback, so if you found this article useful, leave a comment as well 🙂

mod_rewrite in action

After switching from Blogspot to my own WordPress blog I noticed a lot of 404s. These were triggered by the URL change: on Blogspot the article URLs ended in a .html extension, while on WordPress there is no such thing, so every old link ending in .html now pointed to a page that no longer existed.

Quick fix via mod_rewrite: simply edit the .htaccess in the root of the website and add this line after RewriteBase:

RewriteRule ^(.*)\.html$ $1/ [R=301,NC,L]

R=301: redirect with HTTP status 301 (moved permanently)
NC: no case, i.e. case-insensitive matching
L: last rule, stop processing further rules
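The same transformation the rule performs can be illustrated with sed, using the identical pattern (the path here is a made-up example):

```shell
# Strip the trailing .html and append a slash, like the RewriteRule does.
echo '/2012/08/mod-rewrite-in-action.html' | sed -E 's|^(.*)\.html$|\1/|'
# prints /2012/08/mod-rewrite-in-action/
```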

Small benchmark using ab on ec2 instances

I’ve performed a few small benchmarks on EC2 recently, on m1.small and c1.medium, using ab (the Apache HTTP server benchmarking tool). The command used was:

ab -n 1000 -c 10 http://localhost/

-n is the number of requests
-c is the number of concurrent requests

I’ve used localhost to measure the performance of the instance without taking into consideration the bandwidth.

The image used was ami-bac420d3 aka scalr app, 32 bit machine.

m1.small gave a very bad result, only 6-8 requests/second.
c1.medium gave a somewhat better result, but there is still a long way to go: 28-30 requests/second.
On a production server which already had traffic on it, I get somewhere around 60 requests/second.

As you can see m1.small is good only for playing around with Amazon service, but not for real stuff.

I know there are a lot of things that can be done to improve performance and so on, but just wanted to show you all some results.