Install PHP Application and WordPress Alongside Each Other

Problem

You have a website such as http://www.example.com and you would like to serve both WordPress and the files of a PHP application from the root domain. For instance, opening

http://www.example.com/my-post

will open a post on WordPress and

http://www.example.com/index.php

will open a page in a PHP application.

Solution

  • Set up your PHP application in /var/www/html
  • Install WordPress in /usr/share/wordpress
  • Put the following lines into /usr/share/wordpress/.htaccess before # BEGIN WordPress.
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteBase /
RewriteCond "/var/www/html%{REQUEST_URI}" -f
RewriteRule ^/?(.*)$ /app/$1 [L]
</IfModule>

# BEGIN WordPress
...
  • Put the following line into /etc/httpd/conf/httpd.conf
Alias /app /var/www/html
  • Add the following configuration file to /etc/httpd/conf.d/wordpress.conf
Alias /wordpress /usr/share/wordpress
DocumentRoot /usr/share/wordpress

<Directory /usr/share/wordpress/>
  AddDefaultCharset UTF-8
  AllowOverride All
  Require all granted
</Directory>

Restart httpd and you are all done!
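On a systemd-based system such as CentOS 7 (which the /etc/httpd paths above suggest), restarting Apache would look like this:

sudo systemctl restart httpd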


Versioning WordPress with Git and Revisr

WordPress is a powerful platform for getting a simple website up and running quickly. Unfortunately, some things which are considered best practice in software development projects are a bit difficult to realize with WordPress. One of these things is versioning. Thankfully, there is a powerful plugin which enables versioning WordPress using a git repository: Revisr.

As of this writing, one is first greeted by an error message when visiting the Revisr website (something to do with SSL). It is safe to ignore this and to instruct your browser to show the website despite the error (you won’t be giving any confidential information to this website, just browsing around).

In any case, you can download Revisr from the WordPress plugin directory at the following link:

https://wordpress.org/plugins/revisr/

Follow these steps to set it up:

  • Go to your WordPress Admin console
  • Install the plugin
  • Activate it
  • Reload the WordPress Admin console
  • Click on Revisr on the Sidebar
  • Instruct it to create a new repository

That’s already it for setting up a local git repo that enables basic versioning. However, you can also use this plugin to back up your site and all versions to a remote git repository, for instance on BitBucket. The instructions are as follows (assuming you are using an Apache web server):

  • Log in to your server using SSH as a user with sudo rights
  • Execute the following
sudo ssh-keygen
  • Follow the prompts to create the key
  • Execute the following (where /var/www is the root dir of your Apache server)
sudo cp -r /root/.ssh /var/www

sudo chown -R apache:apache /var/www/.ssh
  • Create the file /var/www/.ssh/.htaccess and put in the following content (this is just a security measure)
Deny from all
  • Grab the public key and save it somewhere
sudo cat /var/www/.ssh/id_rsa.pub
  • Create a new account for BitBucket if you don’t have one already.
  • Add an SSH key to your account, using the public key you saved earlier.
  • Create a new repository. Grab the git SSH link for this repository.
  • Go back to your WordPress Admin console and select the Revisr plugin from the sidebar
  • Go to Settings / General. Set your username to git and define an email. Click Save
  • Go to Settings / Remote. Set Remote URL to the SSH link for the repository you saved earlier. Click Save.

Now you can go back to the main Revisr page and start committing changes and pushing to the remote repository!
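Before making your first commit and push, you may want to verify that the key you added actually authenticates against BitBucket. Since the key under /var/www/.ssh is a copy of root’s key, a quick test as root is sufficient (bitbucket.org as the host is an assumption based on the BitBucket setup above):

sudo ssh -T git@bitbucket.org

BitBucket should respond with a message that mentions your username rather than asking for a password.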

 

Selenium ChromeDriver Hangs on Startup

Problem

ChromeDriver and Headless Chrome are a great solution for running automated JavaScript tests.

Today I wanted to run some tests on a Linux server (CentOS 7) and although Chrome and ChromeDriver were installed correctly, my Java app would just hang after ChromeDriver started:

Starting ChromeDriver 2.33.506092 (733a02544d189eeb751fe0d7ddca79a0ee28cce4) on port 31042
Only local connections are allowed.

After a while, it would show the following error:

Exception in thread "main" org.openqa.selenium.WebDriverException: unknown error: Chrome failed to start: exited abnormally
 (Driver info: chromedriver=2.33.506092 (733a02544d189eeb751fe0d7ddca79a0ee28cce4),platform=Linux 2.6.32-042stab112.15 x86_64) (WARNING: The server did not provide any stacktrace information)

Solution

It turns out that the problem was that Chrome does not like to run on Linux operating systems when it is started by the root user.

The quick solution is to add the following argument when launching Chrome:

ChromeOptions chromeOptions = new ChromeOptions();
chromeOptions.addArguments("--no-sandbox");
// pass the options when creating the driver
WebDriver driver = new ChromeDriver(chromeOptions);

However, this unfortunately has security implications, so it would be best to find a way to run the Java app which launches ChromeDriver under a non-root user account.
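One simple way to do that is to create a dedicated, unprivileged user and launch the test runner under that account. A minimal sketch (the user name selenium and the JAR name are placeholders, not taken from the original setup):

# create an unprivileged user for running the browser tests
sudo useradd -m selenium
# launch the Java application as that user instead of root
sudo -u selenium java -jar my-selenium-tests.jar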


Install Latest JDK on Linux Server

Installing the Oracle JDK on a Linux server is often a tricky proposition. For one, the download page requires confirming a prompt and only unlocks the download link after this prompt has been confirmed (via a cookie, I think). This makes it difficult to download the binary in the first place!

Thankfully, MaxdSre has created the following handy script to download and extract the JDK:

OracleSEJDK.sh

If you run this script, you are presented with a prompt listing the available JDK versions. Just select the version you require, and the script will download and install the Oracle JDK.

Finally, you might have existing JDK versions installed on your machine which are managed using alternatives. For a reference on how to point your ‘java’ command to the new installation, please see this article.
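For example, on CentOS this can be done with the alternatives tool roughly as follows (the JDK installation path below is an assumption; use the path the script actually extracted to):

sudo alternatives --install /usr/bin/java java /usr/java/jdk-9.0.1/bin/java 2
sudo alternatives --config java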

 

Improving Node.js https request performance

The HTTPS module of Node.js allows making HTTPS requests to other servers. Unfortunately, making requests with this module often leads to poor performance.

I found that calling a nearby HTTPS server usually took between 150 ms and 300 ms.

With the following simple solution, I was able to reduce this time to less than 40 ms.

By default, Node.js does not keep SSL connections alive. Thus, a new handshake has to be performed for every request. If you make many requests to the same server (a very common situation when dealing with APIs), it makes a lot of sense to keep SSL connections alive. This can be accomplished as follows:

var https = require('https');

var agent = new https.Agent({
 keepAlive: true
});

var options = {
 host: 'objecthub.io',
 port: 443,
 path: '/admin/metrics/main.json',
 method: 'GET',
 agent: agent
};

var req = https.request(options, function(res) {

  var str = "";
  res.on('data', function (chunk) {
     str += chunk;
  });

  res.on('end', function () {
     // done
  });
});

req.write('');
req.end();

req.on('error', function(e) {
   // error
});

Note here that an https.Agent is created with the parameter keepAlive: true. This agent is then passed in the options for the request. You can use the same agent for all requests.

Using this should significantly speed up making requests to the same server.

You can find the example code I used to test this here:

https://repo.on.objecthub.io/object/nodejs-https-performance-test


Simple MySQL / MariaDB Backup

There are many ways to back up a MySQL or MariaDB server. Some ways include using mysqldump, mydumper, LVM snapshots or XtraBackup. However, any robust backup solution boils down to one key requirement:

The ability to restore the databases to a point-in-time.

So, for instance, if your server crashes, you would like to be able to restore to the point in time before the server crashed. If data was deleted accidentally or damaged in some other way, you need to restore the data to the point in time before it was deleted or damaged.

If you use AWS RDS, this ability is provided out of the box. However, you can meet this requirement much more cost-effectively by using a simple VPS (such as Linode or Ramnode) with the simple setup I will describe below.

This setup will perform the following:

  • Write a log of all transactions and back up this log every 5 minutes to a remote server
  • Create a full database backup with mysqldump every day and copy this to a remote server

The backup keeps the binary logs for two days and the dumps for two weeks. Thus, any data loss should be limited to 5 minutes and full database backups should allow restoring data from up to two weeks ago.

System Landscape

  • Server 1: Runs the MariaDB or MySQL instance you want to back up
  • Server 2: Used as a remote location to store backups

(any Linux-based server will do for these)

Step 1: Enable Binary Logs

On Server 1:

  • Edit your my.cnf file (e.g. under /etc/my.cnf or /etc/my.cnf.d/server.cnf). Make sure the following lines are present:
log-bin=logs/backup
expire-logs-days=2
server-id=1
  • Create the folder logs in your MySQL data dir (e.g. /var/lib/mysql)
mkdir /var/lib/mysql/logs
  • Set owner to user mysql for folder logs
chown mysql:mysql /var/lib/mysql/logs
  • Restart MySQL server
sudo systemctl restart mysqld

Now binary logs should be written into the logs folder in your MySQL data dir.
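To verify that binary logging is active, you can list the logs directory or ask the server directly (assuming /var/lib/mysql is your data dir):

ls -l /var/lib/mysql/logs
mysql -u root -p -e "SHOW BINARY LOGS;"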

Step 2: Create Script for Full Backups with mysqldump

On Server 1:

  • Create the folder /var/lib/mysql/dumps
  • Create the script /usr/local/mysql_dump.sh and copy the contents of mariadb-backup.sh into this script. (A rough sketch of what such a script might look like is shown at the end of this step.)
  • Search for the line starting with dumpopts. In this line, provide your MySQL username and password.
  • Make the script executable
sudo chmod +x /usr/local/mysql_dump.sh
  • Schedule the script to run once every day using cron or systemd

cron

30 3 * * * /usr/local/mysql_dump.sh

systemd – service definition

Create /etc/systemd/system/mysql_dump.service

[Unit]
Description=Dumps mysql databases to backup directory

[Service]
Type=oneshot
ExecStart=/usr/local/mysql_dump.sh

systemd – timer definition

Create /etc/systemd/system/mysql_dump.timer

[Unit]
Description=Run MySQL dump once per day

[Timer]
OnCalendar=*-*-* 03:13:00

And don’t forget to start the timer:

sudo systemctl start mysql_dump.timer
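If you also want the timer to come back automatically after a reboot, enable it in addition to starting it:

sudo systemctl enable mysql_dump.timer

In case you cannot retrieve the referenced mariadb-backup.sh, the following is a rough, purely illustrative sketch of what a minimal dump script could look like; the user, password, paths and retention below are assumptions, and the real script may differ:

#!/bin/bash
# purely illustrative sketch of a daily full dump
dumpopts="--user=backup --password=secret --single-transaction --flush-logs --all-databases"
mysqldump $dumpopts | gzip > /var/lib/mysql/dumps/dump-$(date +%Y%m%d).sql.gz
# keep dumps for two weeks, as described above
find /var/lib/mysql/dumps -name 'dump-*.sql.gz' -mtime +14 -delete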

Step 3: Write Script to Back Up Files to Remote Server

On Server 2:

  • Log into your second server. Create a user mysqlbackup here:
useradd mysqlbackup
  • Change to mysqlbackup user
sudo su - mysqlbackup
  • Create directories logs and dumps
mkdir logs
mkdir dumps

On Server 1:

  • Copy public key for root user from /root/.ssh/id_rsa.pub
  • If the public key for root does not exist, run:
sudo ssh-keygen -t rsa

On Server 2:

  • While logged in as user mysqlbackup, make sure the following file exists
~/.ssh/authorized_keys
  • Into this file, paste the public key for root on Server 1
  • Ensure correct permissions for the .ssh folder:
chmod 700 .ssh
chmod 600 .ssh/authorized_keys

On Server 1:

  • Test access to Server 2 (the sudo is important here, since you want to connect as the root user). Replace yourservername.com with the address/IP of Server 2.
sudo ssh mysqlbackup@yourservername.com
  • If SSH does not work for some reason, check this guide for more information.
  • Create the script /usr/local/mysql_backup.sh. Replace yourserver.com with the address/IP of Server 2.
#!/bin/bash
rsync -avz --delete /var/lib/mysql/logs mysqlbackup@yourserver.com:/home/mysqlbackup
rsync -avz --delete /var/lib/mysql/dumps mysqlbackup@yourserver.com:/home/mysqlbackup
  • Make the script executable
sudo chmod +x /usr/local/mysql_backup.sh
  • Add the following line to the crontab for the user root
*/5 * * * * /usr/local/mysql_backup.sh
  • If you are using systemd, create the following two files

/etc/systemd/system/mysql_backup.service

[Unit]
Description=Backs up Mysql binary logs and full backups to remote server

[Service]
Type=oneshot
ExecStart=/usr/local/mysql_backup.sh

/etc/systemd/system/mysql_backup.timer

[Unit]
Description=Run MySQL binlog backup and full backup sync every 5 minutes

[Timer]
OnCalendar=*:0/5

Then start the timer with

sudo systemctl start mysql_backup.timer
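As with the dump timer, you may want to enable it so that it survives reboots, and you can check that both timers are scheduled:

sudo systemctl enable mysql_backup.timer
systemctl list-timers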

Resize EC2 Volume (without Resizing Partition)

Problem

You would like to resize a volume attached to an EC2 instance.

Solution

Do the following:

  • Create a snapshot of your volume (instructions)
  • Stop your instance
  • Go to EBS / Volumes and select Actions / Modify Volume
  • Enter the new size for your volume (note you can only ever make the volume larger) and click on Modify
  • Wait for the modification to be complete (this might take a while, like 30 min or so)
  • Start your instance

Now, if everything went well, you should have more space available on the disk for the virtual machine. To confirm this, run:

df -h

You should see the new size of the volume as the size of your main partition.
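To also check whether the partition was grown along with the volume, you can compare the disk and partition sizes reported by lsblk; if the partition is smaller than the disk, see the notes below:

lsblk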

Notes

  • If the size of your partition does not match the size of the volume, you probably need to resize your partition (instructions).
  • Resizing the partition is a very painful process that I think is best avoided at all costs. I think it helps if the EC2 instance attached to the volume is stopped when the resize is performed, so make sure that this is the case before you do the resize.
  • If you forgot to stop your instance and need to do a partition resize, there is a little workaround: wait for six hours, then resize your volume again (this time while the instance is stopped). Hopefully this adjusts your partition to the correct size.
  • In the above, you might be able to start up your instance even while the new volume is still optimizing. I haven’t tested this, but my guess is that it would work.