Improving Node.js https request performance

The HTTPS module of Node.js allows making HTTPS requests to other servers. Unfortunately, requests made with this module often perform poorly.

I found that calling a nearby HTTPS server usually took between 150 ms and 300 ms.

With the following simple solution, I was able to reduce this time to less than 40 ms.

By default, Node.js does not keep SSL connections alive. Thus, a new handshake has to be performed for every request. If you make many requests to the same server (a very common situation when dealing with APIs), it makes a lot of sense to keep SSL connections alive. This can be accomplished as follows:

var https = require('https');

// Create an agent that keeps connections (and their TLS sessions) alive
var agent = new https.Agent({
  keepAlive: true
});

var options = {
  host: 'objecthub.io',
  port: 443,
  path: '/admin/metrics/main.json',
  method: 'GET',
  agent: agent
};

var req = https.request(options, function(res) {

  var str = '';
  res.on('data', function (chunk) {
    str += chunk;
  });

  res.on('end', function () {
    // done
  });
});

req.on('error', function(e) {
  // error
});

req.end();

Note here that an https.Agent is created with the parameter keepAlive: true. This agent is then passed in the options for the request. You can reuse the same agent for all requests.

Using this should significantly speed up making requests to the same server.

You can find the example code I used to test this here:

https://repo.on.objecthub.io/object/nodejs-https-performance-test

More Resources

Simple MySQL / MariaDB Backup

There are many ways to back up a MySQL or MariaDB server. Some ways include using mysqldump, mydumper, LVM snapshots or XtraBackup. However, any robust backup solution boils down to one key requirement:

The ability to restore the databases to a point-in-time.

So, for instance, if your server crashes, you would like to be able to restore to the point in time before the server crashed. If data was deleted accidentally or damaged in some other way, you need to restore the data to the point in time before it was deleted or damaged.

If you use AWS RDS, this ability is provided out of the box. However, you can meet this requirement much more cost-effectively with a simple VPS (such as Linode or Ramnode) and the simple setup I will describe below.

This setup will perform the following:

  • Write a log of all transactions and back up this log every 5 minutes to a remote server
  • Create a full database backup with mysqldump every day and copy this to a remote server

The backup keeps the binary logs for two days and the dumps for two weeks. Thus, any data loss should be limited to 5 minutes and full database backups should allow restoring data from up to two weeks ago.

System Landscape

  • Server 1: Runs the MariaDB or MySQL instance you want to back up
  • Server 2: Used as a remote location to store backups

(any Linux-based server will do for these)

Step 1: Enable Binary Logs

On Server 1:

  • Edit your my.cnf file (e.g. under /etc/my.cnf or /etc/my.cnf.d/server.cnf) and ensure the following lines are present in the [mysqld] section:
log-bin=logs/backup
expire-logs-days=2
server-id=1
  • Create the folder logs in your MySQL data dir (e.g. /var/lib/mysql)
mkdir /var/lib/mysql/logs
  • Set the owner of the logs folder to the mysql user
chown mysql:mysql /var/lib/mysql/logs
  • Restart MySQL server
sudo systemctl restart mysqld

Binary logs should now be written into the logs folder in your MySQL data dir.
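
To verify that binary logging is active, you can, for instance, list the binary logs (assuming you can connect as root):

mysql -u root -p -e "SHOW BINARY LOGS;"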

Step 2: Create a Script for Full Backups with mysqldump

On Server 1:

  • Create the folder /var/lib/mysql/dumps
  • Create the script /usr/local/mysql_dump.sh and copy the contents of mariadb-backup.sh into this script (a minimal sketch of such a script is given at the end of this step).
  • Search for the line starting with dumpopts and provide your MySQL username and password in this line.
  • Make the script executable
sudo chmod +x /usr/local/mysql_dump.sh
  • Schedule the script to run once every day using cron or systemd

cron

30 3 * * * /usr/local/mysql_dump.sh

systemd – service definition

Create /etc/systemd/system/mysql_dump.service

[Unit]
Description=Dumps mysql databases to backup directory

[Service]
Type=oneshot
ExecStart=/usr/local/mysql_dump.sh

systemd – timer definition

Create /etc/systemd/system/mysql_dump.timer

[Unit]
Description=Run MySQL dump once per day

[Timer]
OnCalendar=*-*-* 03:13:00

[Install]
WantedBy=timers.target

And don’t forget to enable and start the timer:

sudo systemctl enable mysql_dump.timer
sudo systemctl start mysql_dump.timer
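
If you don’t have the original mariadb-backup.sh at hand, the following is a minimal sketch of what /usr/local/mysql_dump.sh could look like (the user, password and dump options are placeholders you need to adapt to your setup):

#!/bin/bash
backupdir=/var/lib/mysql/dumps
# Placeholder credentials; --master-data=2 records the binlog position,
# which helps when restoring to a point in time
dumpopts="--user=backup --password=yourpassword --single-transaction --routines --events --master-data=2 --all-databases"

# Write a timestamped, compressed dump of all databases
mysqldump $dumpopts | gzip > "$backupdir/dump-$(date +%Y-%m-%d).sql.gz"

# Keep dumps for two weeks
find "$backupdir" -name 'dump-*.sql.gz' -mtime +14 -delete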

Step 3: Write a Script to Back Up Files to the Remote Server

On Server 2:

  • Log into your second server and create a user mysqlbackup there:
sudo useradd -m mysqlbackup
  • Change to mysqlbackup user
sudo su - mysqlbackup
  • Create directories logs and dumps
mkdir logs
mkdir dumps

On Server 1:

  • Copy public key for root user from /root/.ssh/id_rsa.pub
  • If the public key for root does not exist, run:
sudo ssh-keygen -t rsa

On Server 2:

  • While logged in as user mysqlbackup, ensure the following file exists
~/.ssh/authorized_keys
  • Into this file, paste the public key for root on Server 1
  • Ensure correct permissions for the .ssh folder:
chmod 700 .ssh
chmod 600 .ssh/authorized_keys

On Server 1:

  • Test access to Server 2 (the sudo is important here, since you want to connect as the root user). Replace yourservername.com with the address/IP of Server 2.
sudo ssh mysqlbackup@yourservername.com
  • If the SSH connection does not work for some reason, check this guide for more information.
  • Create the script /usr/local/mysql_backup.sh. Replace yourserver.com with the address/IP of your server.
#!/bin/bash
rsync -avz /var/lib/mysql/logs mysqlbackup@yourserver.com:/home/mysqlbackup
rsync -avz /var/lib/mysql/dumps mysqlbackup@yourserver.com:/home/mysqlbackup
  • Make the script executable
sudo chmod +x /usr/local/mysql_backup.sh
  • Add the following line to the crontab for the user root
*/5 * * * * /usr/local/mysql_backup.sh
  • If you are using systemd, create the following two files

/etc/systemd/system/mysql_backup.service

[Unit]
Description=Backs up Mysql binary logs and full backups to remote server

[Service]
Type=oneshot
ExecStart=/usr/local/mysql_backup.sh

/etc/systemd/system/mysql_backup.timer

[Unit]
Description=Run MySQL binlog backup and full backup sync every 5 minutes

[Timer]
OnCalendar=*:0/5

[Install]
WantedBy=timers.target

Then enable and start the timer with

sudo systemctl enable mysql_backup.timer
sudo systemctl start mysql_backup.timer
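
To check that both timers are active and scheduled as expected, you can list them with:

systemctl list-timers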

Resize EC2 Volume (without Resizing Partition)

Problem

You would like to resize a volume attached to an EC2 instance.

Solution

Do the following:

  • Create a snapshot of your volume (instructions)
  • Stop your instance
  • Go to EBS / Volumes and select Actions / Modify Volume


  • Enter the new size for your volume (note you can only ever make the volume larger) and click on Modify


  • Wait for the modification to be complete (this might take a while, like 30 min or so)
  • Start your instance
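
If you prefer the command line, the same steps can also be performed with the AWS CLI. Here is a minimal sketch (the volume and instance IDs are placeholders you need to replace with your own):

# Snapshot the volume first
aws ec2 create-snapshot --volume-id vol-0123456789abcdef0 --description "before resize"

# Stop the instance
aws ec2 stop-instances --instance-ids i-0123456789abcdef0

# Request the new volume size in GiB
aws ec2 modify-volume --volume-id vol-0123456789abcdef0 --size 100

# Watch the progress of the modification
aws ec2 describe-volumes-modifications --volume-id vol-0123456789abcdef0

# Start the instance again once the modification is complete
aws ec2 start-instances --instance-ids i-0123456789abcdef0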

Now, if everything went well, you should have more space available on the disk for the virtual machine. To confirm this, run:

df -h

You should see the new size of the volume reflected in the size of your main partition.

Notes

  • If the size of your partition does not match the size of the volume, you probably need to resize your partition (instructions; see also the sketch after these notes).
  • Resizing the partition is a painful process that is best avoided. It helps if the EC2 instance attached to the volume is stopped while the resize is performed, so make sure this is the case before you do the resize.
  • If you forgot to stop your instance and need to do a partition resize, there is a little workaround: wait for six hours, then resize your volume again (this time while the instance is stopped). The partition size will then hopefully be adjusted correctly.
  • In the above, you might be able to start up your instance even while the new volume is still optimizing. I haven’t tested this, but my guess is that it would work.
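
If you do end up having to grow the partition and filesystem yourself, a minimal sketch could look as follows (this assumes an ext4 root filesystem on /dev/xvda1; adjust the device names to your instance, and use xfs_growfs instead of resize2fs for XFS):

# Compare the volume size with the partition size
lsblk

# Grow partition 1 on /dev/xvda to fill the volume (growpart is part of cloud-utils)
sudo growpart /dev/xvda 1

# Grow the ext4 filesystem to fill the partition
sudo resize2fs /dev/xvda1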


Library for Parsing multipart File Upload with Java

One of the most convenient ways to upload files from the web browser to the server is by using file inputs in HTML forms.

Many web servers come with preconfigured modules for parsing this data on the server side. However, sometimes your HTTP server of choice might not offer such a module, and you are left with the task of parsing the data the browser submits yourself.

I specifically encountered this problem when working with a Netty-based server.

The form will most likely submit the files to your server as part of a multipart/form-data request. These are not that straightforward to parse. Thankfully, there is the library Apache Commons FileUpload which can be used for this purpose.

Unfortunately, processing arbitrary binary data with this library is not very straightforward. This motivated me to write a small library – delight-fileupload – which wraps Commons FileUpload and makes parsing multipart form data a breeze. (This library is part of the Java Delight Suite.)

Just include the library and let it parse your data as follows:

FileItemIterator iterator = FileUpload.parse(data, contentType);

Here, data is a byte array with the data you received from the client and contentType is the content type sent via the HTTP header.

Then you can iterate through all the files submitted in the form as follows:

while (iterator.hasNext()) {
  FileItemStream item = iterator.next();
  if (item.isFormField()) {
    // a regular form field; its name is available via item.getFieldName()
  } else {
    InputStream stream = item.openStream();
    // work with uploaded file data by processing stream ...
  }
}

You can find the library on GitHub. It is on Maven Central. Just add the following dependency to your Java, Scala etc. application and you are good to go:

<dependency>
 <groupId>org.javadelight</groupId>
 <artifactId>delight-fileupload</artifactId>
 <version>0.0.3</version>
</dependency>

You can also check for the newest version on the JCenter repository.

I hope this is helpful. If you have any comments or suggestions, leave a comment here or raise an issue on the javadelight-fileupload GitHub project.


Set up MySQL Replication with Amazon RDS

Problem

You have an existing server that runs a MySQL database (either on EC2 or not) and you would like to replicate this server with an Amazon RDS MySQL instance.

After you follow the instructions from Amazon, your slave reports the IO status:

Slave_IO_State: Connecting to master

… and the replication does not work.

Solution

AWS provides very good documentation on how to set up the replication: Replication with a MySQL or MariaDB Instance Running External to Amazon RDS.

Follow the steps there but be aware of the following pitfall:

In step 6, `create a user that will be used for replication`, it says you should create a user for the domain ‘mydomain.com’. That will in all likelihood not work. Instead, try to find out the IP address of the Amazon RDS instance that will be the replication slave.

One way to do this is as follows:

  • Create the ‘repl_user’ for the domain ‘%’, e.g.:
CREATE USER 'repl_user'@'%' IDENTIFIED BY '<password>';
  • Also do the grants for this user
GRANT REPLICATION CLIENT, REPLICATION SLAVE ON *.* TO 'repl_user'@'%' IDENTIFIED BY '<password>';
  • Open port 3306 on your server for any IP address (see the firewall sketch at the end of this post for one way to do this).
  • Then the replication should work.
  • Go to your master and run the following command:
SHOW PROCESSLIST;
  • Find the process with the user repl_user and get the IP address from there. This is the IP address for your Amazon RDS slave server.
  • Delete the user 'repl_user'@'%' on the master
  • Create the user 'repl_user'@'[IP address of slave]' on the master
  • Modify your firewall of your master to only accept connections on port 3306 from the IP address of the slave.
  • Restart the replication (these commands are run on the RDS instance) with
call mysql.rds_stop_replication;
call mysql.rds_start_replication;
  • And check the status on the replica with
show slave status\G

The slave IO status should now be “Waiting for master to send event”.
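
For the firewall steps above, here is a minimal sketch using iptables (assuming iptables manages the firewall on your master; adapt it if you use firewalld, ufw or EC2 security groups):

# Temporarily accept MySQL connections from any IP so the slave can connect
sudo iptables -A INPUT -p tcp --dport 3306 -j ACCEPT

# ... identify the slave's IP via SHOW PROCESSLIST, then tighten the rule ...
sudo iptables -D INPUT -p tcp --dport 3306 -j ACCEPT
sudo iptables -A INPUT -p tcp --dport 3306 -s <IP address of slave> -j ACCEPT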


Upgrade MySQL 5.5 to 5.6 on EC2/CentOS/RHEL

Problem

You would like to upgrade MySQL 5.5 to MySQL 5.6 on an existing server that uses the YUM software package manager.

Solution

Just enter the following few simple commands and you should be good to go. But, please, do a thorough full backup of your system before you do the upgrade just in case.

[1] Create a MySQL dump from which you will load the data into the upgraded server:

mysqldump -u root -p --add-drop-table --routines --events --all-databases --force > data-for-upgrade.sql

[2] Stop your MySQL server

sudo service mysqld stop

[3] Remove MySQL 5.5

sudo yum remove mysql55-server mysql55-libs mysql55-devel mysql55-bench mysql55

[4] Clear the MySQL data directory

sudo rm -r /var/lib/mysql/*

[5] Install MySQL 5.6

sudo yum install mysql56 mysql56-devel mysql56-server mysql56-libs

[6] Start MySQL server

sudo service mysqld start

[7] Set the root password

/usr/libexec/mysql56/mysqladmin -u root password 'xxx'

[8] Import your data

mysql -u root -p --force < data-for-upgrade.sql

[9] Verify all tables will work in 5.6

sudo mysql_upgrade -u root -p --force
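
To confirm that the server is now indeed running 5.6, you can also check the reported version:

mysql -u root -p -e "SELECT VERSION();"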

All done!

Notes

  • Upgrading to 5.7 should work in a similar way once 5.7 is available in your RPM repos (at the time of writing, it is not available in the Amazon Linux repo).


Fix Travis CI Error ‘This is not an active repository’

Problem

Your repositories have been building just fine using Travis CI, but suddenly the builds stop working and the Travis CI website shows a screen with the message:

`This is not an active repository`


Solution

  • Go to GitHub and make sure that you are logged in with the account that owns the repository.
  • Go to Travis CI and sign in with your GitHub account
  • Go to the repository
  • Click on the button ‘Activate Repository’

If that works, you are done. However, if you get the error ‘There was an error while trying to activate the repository.’, do the following:

  • Go to the settings for your account on Travis CI and make sure that the repository you want to build is enabled.

References

Travis CI Issue #5629

StackOverflow `Seeing “This is not an active repository” for an active repository`