Easy VPS Backup

I love VPS providers such as RamNode or ServerCheap which provide excellent performance at a low price point. Unfortunately, when going with most VPS providers, there are no easy built-in facilities for backing up and restoring the data of your servers (such as with AWS EC2 snapshots). Thankfully, there is some powerful, easy to use and open source software available to take care of the backups for us!

In this article, I am going to show how to easily back up your VPS using restic. Another tool you might want to look at is Duplicity, which provides a higher level of security but is also more difficult to use. (And there are many, many other alternatives available as well.)

You will need access to two servers to follow this guide: one server which should be backed up (referred to below as the Backup Client) and one server which will host your backups (referred to below as the Backup Server).

Installing Restic (on Backup Client)

  • Get the URL to the binary for your system from the latest restic release.
  • Log into the Backup Client
  • Download the binary using wget

wget https://github.com/restic/restic/releases/download/v0.8.1/restic_0.8.1_linux_amd64.bz2

  • Unzip the binary

bzip2 -dk restic_0.8.1_linux_amd64.bz2

  • Move restic to /opt

sudo mv restic_0.8.1_linux_amd64 /opt/restic

  • Make restic executable

chmod +x /opt/restic
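
  • Optionally, verify that the binary works

/opt/restic version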

Establishing SSH Connection

  • On the Backup Client, generate an SSH private and public key (confirm the location /root/.ssh/id_rsa and provide no passphrase)

sudo su - root
ssh-keygen -t rsa -b 4096

  • Get the public key

cat /root/.ssh/id_rsa.pub 

  • On the Backup Server, create a new user called backup
  • Copy the public key from the Backup Client to the Backup Server so that the Backup Client is authorised to access it via SSH. Just copy the output from above and paste it at the end of the authorized_keys file

sudo vi /home/backup/.ssh/authorized_keys

  • On the Backup Client, test the connection to the Backup Server.

sudo ssh backup@...

Perform Backup (on Backup Client)

  • Initialise the restic repository (you will be asked to choose a password for the repository; remember it, since you will need it for every backup and restore)

/opt/restic -r sftp:backup@[backup-server]:/home/backup/[backup client host name] init

  • Backup the full hard disk (this may take a while!)

/opt/restic --exclude={/dev,/media,/mnt,/proc,/run,/sys,/tmp,/var/tmp} -r sftp:backup@[backup-server]:/home/backup/[backup client host name] backup /


Schedule Regular Backups (Backup Client)

  • On the Backup Client, create the file /root/restic_password and paste the repository password (chosen during init) into this file.
  • Create the script file /root/restic.sh (replace with the details of your servers)


#!/bin/bash
/opt/restic -r sftp:backup@[backup-server]:/home/backup/[backup client host name] --password-file=/root/restic_password --exclude={/dev,/media,/mnt,/proc,/run,/sys,/tmp,/var/tmp} backup /
/opt/restic -r sftp:backup@[backup-server]:/home/backup/[backup client host name] --password-file=/root/restic_password forget --keep-daily 7 --keep-weekly 5 --keep-monthly 12 --keep-yearly 75
/opt/restic -r sftp:backup@[backup-server]:/home/backup/[backup client host name] --password-file=/root/restic_password prune
/opt/restic -r sftp:backup@[backup-server]:/home/backup/[backup client host name] --password-file=/root/restic_password check

  • Make script executable

chmod +x /root/restic.sh

  • Trial-run this script: /root/restic.sh
  • If everything worked fine, schedule the script to run daily (e.g. with sudo crontab -e) or at whichever schedule you prefer. (Note that the script might take 10 min or more to execute, so it is probably not advisable to run it very frequently. If you need more frequent backups, just run the first command of the script, the backup, which is faster than the following maintenance operations.)

0 22 * * * /root/restic.sh
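
It is also a good idea to test that you can actually restore from your backups; a quick sketch (the target directory /tmp/restore-test is just an example):

/opt/restic -r sftp:backup@[backup-server]:/home/backup/[backup client host name] --password-file=/root/restic_password snapshots
/opt/restic -r sftp:backup@[backup-server]:/home/backup/[backup client host name] --password-file=/root/restic_password restore latest --target /tmp/restore-test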


That’s it! All important files from your server will now be backed up regularly.

Java Logging – The Ultimate, Easy Guide

At first glance, logging looks like an exceedingly simple problem to solve. However, it is one of those problems which unfortunately becomes more and more complex the longer one looks at it.

I think that is why there are so many Java logging frameworks (everyone seems to have thought they had found the solution), with many of them being less than optimal, especially under load.

In effect, for someone who wants to start with logging in Java, there is an overwhelming, confusing and often contradictory wealth of resources available. In this guide, I will provide an introduction to Java logging in three simple steps: first, choose the right framework; second, get your first log message printed to the screen; and third, explore more advanced logging topics. So, without further ado, here are the steps to get you started with Java logging:


Choosing a Framework

The first question to sort out when considering logging for Java is which logging framework to use. Unfortunately, there are quite a few to choose from.

The standard Java logging (java.util.logging) seems to be very unpopular. Further, it seems that Log4j and Logback both have architectural disadvantages compared to Log4j 2, specifically with respect to the performance impact which logging has on the host application. Loggly ran some tests on the different logging frameworks, and the theoretical advantages of Log4j 2 also seem to be reflected in cold, hard data.

Thus, I think the prudent choice is to go with Log4j 2 in all but exceptional circumstances.

How To Get Started

The official documentation for Log4j 2 is not very approachable. Simply speaking, you only need to do two things to get ready for logging with Log4j 2.

The first is to add the following Maven dependency:
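
(The version below is illustrative; check for the latest 2.x release.)

<dependency>
    <groupId>org.apache.logging.log4j</groupId>
    <artifactId>log4j-core</artifactId>
    <version>2.20.0</version>
</dependency>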


The second is to create the file src/main/resources/log4j2.properties in your project with the following content:

status = error
name = PropertiesConfig

filters = threshold

filter.threshold.type = ThresholdFilter
filter.threshold.level = debug

appenders = console

appender.console.type = Console
appender.console.name = STDOUT
appender.console.layout.type = PatternLayout
appender.console.layout.pattern = %d{yyyy-MM-dd HH:mm:ss} %-5p %c{1}:%L - %m%n

rootLogger.level = debug
rootLogger.appenderRefs = stdout
rootLogger.appenderRef.stdout.ref = STDOUT

(Note that you may also provide the configuration in XML format. In that case, simply create a file named log4j2.xml in src/main/resources.)

Now you are ready to start logging!

import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

public class OutputLog {
  public static void main(String[] args) {
    Logger logger = LogManager.getLogger();
    logger.debug("Hello from Log4j 2!");
  }
}
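
The Log4j 2 API also supports parameterized messages: the {} placeholders are only evaluated if the log level is enabled, which avoids needless string construction. For instance (count and elapsed being hypothetical local variables):

int count = 42;
long elapsed = 17;
logger.info("Processed {} records in {} ms", count, elapsed);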

Master Class

The real power of using a logging framework is realised by modifying the properties file created earlier.

You can, for instance, configure it to log into a file and rotate this log file automatically (so it doesn’t just keep on growing and growing). The following presents a properties file to enable this:

status = error
name = PropertiesConfig

property.filename = ./logs/log.txt

filters = threshold

filter.threshold.type = ThresholdFilter
filter.threshold.level = debug

appenders = rolling

appender.rolling.type = RollingFile
appender.rolling.name = RollingFile
appender.rolling.fileName = ${filename}
appender.rolling.filePattern = ./logs/log-backup-%d{MM-dd-yy-HH-mm-ss}-%i.log.gz
appender.rolling.layout.type = PatternLayout
appender.rolling.layout.pattern = %d{yyyy-MM-dd HH:mm:ss} %-5p %c{1}:%L - %m%n
appender.rolling.policies.type = Policies
appender.rolling.policies.time.type = TimeBasedTriggeringPolicy
appender.rolling.policies.time.interval = 1
appender.rolling.policies.time.modulate = true
appender.rolling.policies.size.type = SizeBasedTriggeringPolicy
appender.rolling.strategy.type = DefaultRolloverStrategy
appender.rolling.strategy.max = 20

loggers = rolling

logger.rolling.name = file
logger.rolling.level = debug
logger.rolling.additivity = false
logger.rolling.appenderRef.rolling.ref = RollingFile

rootLogger.level = debug
rootLogger.appenderRefs = stdout
rootLogger.appenderRef.stdout.ref = RollingFile

This configuration will result in a log file being written into the logs/ folder. If the application is run multiple times, previous log files will be packed into gzipped files.


For even more sophisticated logging, you would want to set up a Graylog server and then send the logs there. This can be achieved using the logstash-gelf library. Add the following Maven dependency:
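
(Again, the version below is illustrative; check for the latest release of logstash-gelf.)

<dependency>
    <groupId>biz.paluch.logging</groupId>
    <artifactId>logstash-gelf</artifactId>
    <version>1.11.2</version>
</dependency>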


And then provide a log4j2.xml configuration file like the following (replace yourserver.com with your Graylog server):

<Configuration>
  <Appenders>
    <Gelf name="gelf" host="udp:yourserver.com" port="51401" version="1.1" extractStackTrace="true"
          filterStackTrace="true" mdcProfiling="true" includeFullMdc="true" maximumMessageSize="8192">
      <Field name="timestamp" pattern="%d{dd MMM yyyy HH:mm:ss,SSS}" />
      <Field name="level" pattern="%level" />
      <Field name="simpleClassName" pattern="%C{1}" />
      <Field name="className" pattern="%C" />
      <Field name="server" pattern="%host" />
      <Field name="server.fqdn" pattern="%host{fqdn}" />
      <DynamicMdcFields regex="mdc.*" />
      <DynamicMdcFields regex="(mdc|MDC)fields" />
    </Gelf>
  </Appenders>
  <Loggers>
    <Root level="INFO">
      <AppenderRef ref="gelf" />
    </Root>
  </Loggers>
</Configuration>
Then create a new GELF UDP input in Graylog (and don't forget to open the firewall for UDP port 51401) and you are ready to receive messages!


Finally, I personally find the logging frameworks, with all their dependencies and their insistence on configuration files being exactly where they expect them, a bit intrusive. Thus, I developed delight-simple-log: this very simple project can be used as a dependency in your reusable components and then linked with Log4j 2 in the main package for an app. That way, the Log4j 2 dependencies will only be present in one of your modules.



Setting up Prometheus and Grafana for CentOS / RHEL 7 Monitoring

As mentioned in my previous post, I have long been looking for a centralised solution for collecting logs and monitoring metrics. I think my search was unsuccessful because I was looking for too many things in one solution. Instead, I have now found two separate solutions, one for log management (using Graylog) and one for metrics (using Prometheus and Grafana). I deployed both of these on very inexpensive VPS machines and so far I am very happy with them.

In this post, I provide some pointers on how to set up the metrics solution based on Prometheus and Grafana. I assume you are using a RHEL / CentOS system as the server hosting Prometheus and Grafana, and that you are interested in the OS metrics for a CentOS system. This tutorial will guide you through setting up the Prometheus server, collecting metrics for it using node_exporter, and finally creating dashboards and alerts using Grafana for the collected metrics.

Installing Prometheus Server

  • Follow the excellent instructions here with the following modifications.
    • Make sure to download the latest version of Prometheus (the link can be obtained on the Prometheus download page, this guide works with version 2.1.0)
    • For the systemd service, use the following file:

[Unit]
Description=Prometheus Server

[Service]
ExecStart=/opt/prometheus/prometheus --config.file=/opt/prometheus/prometheus.yml

[Install]
WantedBy=multi-user.target

  • If you are using a firewall, add the following rule to /etc/sysconfig/iptables and restart service iptables:

-A INPUT -p tcp -m state --state NEW -m tcp --dport 9090 -j ACCEPT

Viewing a Metric

Go to yourserver.com:9090/graph and execute a query such as http_requests_total. You should then see the data of HTTP requests served by the Prometheus server itself. If you click on the tab Graph, you can see the data as a graph.
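
To get a more useful view of a counter such as this, you can wrap it in a rate; for instance, the following expression shows the per-second request rate averaged over five minutes:

rate(http_requests_total[5m])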

Installing Node Exporter on the Server to be Monitored

  • Download the latest node_exporter release from the Prometheus download page (this guide works with version 0.15.2) and unpack it to /opt

cd /opt
wget https://github.com/prometheus/node_exporter/releases/download/v0.15.2/node_exporter-0.15.2.linux-amd64.tar.gz
tar xvfz node_exporter-*.tar.gz

  • Create a link (replace 0.15.2 with the version you have downloaded)

ln -s node_exporter-0.15.2.linux-amd64 node_exporter

  • Define a service for node_exporter in /etc/systemd/system/node_exporter.service

(or if you are using init.d, please see this article).

[Unit]
Description=Node Exporter

[Service]
ExecStart=/opt/node_exporter/node_exporter --no-collector.diskstats

[Install]
WantedBy=multi-user.target

(The --no-collector.diskstats flag is added above since diskstats often does not work in virtualised environments. If that is not an issue for you, feel free to leave this argument out.)

  • Enable and start service

systemctl daemon-reload
systemctl start node_exporter
systemctl enable node_exporter

  • Tell Prometheus to scrape these metrics by adding the following job under the scrape_configs section in /opt/prometheus/prometheus.yml (and restart the prometheus service afterwards):

  - job_name: "node"
    static_configs:
      - targets: ['localhost:9100']

Now if you go to yourserver.com:9090/graph you can, for instance, enter the expression node_memory_MemFree and see the free memory available on the server.

You can also install node_exporter on another server. Simply point the job definition to this server's address; and of course remember to open port 9100 on that server.

Installing Grafana

The default Prometheus interface is quite basic. Thankfully Grafana offers excellent integration with Prometheus and will result in a much nicer UI.

You can easily install Grafana on your own server or use a free cloud-based instance (limited to one user and five dashboards).

To install Grafana locally:

  • First follow these instructions.
  • Grafana by default runs on port 3000, so make sure you add the following firewall rule after you install it:

-A INPUT -p tcp -m state --state NEW -m tcp --dport 3000 -j ACCEPT

  • In the file /etc/grafana/grafana.ini, provide details for an SMTP connection which can be used for sending emails (section [smtp]).
  • Also update the host name in the field domain to the address at which your server can be reached on the internet.

Configuring Grafana

  • Go to yourserver.com:3000
  • The default login is username ‘admin’ and password ‘admin’. Create a new user with a good password and delete the admin user.
  • First connect with your Prometheus instance as a data source.
  • Then go to Dashboards and select Import, and import a dashboard for the Node Exporter metrics (the following assumes the Node Exporter Server Metrics dashboard)


Done! You should now be able to see the metrics for your server such as CPU usage or free memory.


If you monitor multiple servers, you can switch between them by clicking next to the text ‘node’.


Additional servers will appear here if you add them to the Prometheus configuration:

  - job_name: "node"
    static_configs:
      - targets: ['localhost:9100', 'xxxxx']

Configure Alerting

While Prometheus has some built-in alerting facilities, alerting in Grafana is much easier to use. To set up alerting for the dashboard you have created:

  • Go to Alerting / Notification Channels
  • Click on New Channel
  • Provide a name for the channel and your email address and click Save.


  • Next go to the dashboard you have created: Node Exporter Server Metrics
  • Click on the first panel to select it


  • Next click on Edit in the menu which is shown above the panel
  • Go to the Alert tab and click on Create Alert


  • Configure the alert condition, for instance that the average value of the query over the last five minutes is above a threshold (for more details about this, please see this page):


  • Select the Notification page on the right
  • In Send to, select the notification channel you have created earlier.
  • Provide a message such as: CPU usage high.
  • Save the dashboard (Ctrl+S)

Done! You should now receive notifications if the CPU usage on any of the servers monitored on this dashboard is too high.


Setting Up Graylog Server

I have been looking around for an easy to use and reasonably priced solution for managing logs distributed among many servers, as well as system metrics for these servers. I had a brief look into setting up an ELK system, but I found it quite cumbersome. Recently I came across Graylog, which looked quite promising, so I set up a little sample system.

While the documentation for Graylog is generally quite good, I found it a bit difficult to piece together the various steps in setting up a minimal working system. Thus I have documented these steps below!

Installing Graylog and Dependencies

Just follow the excellent CentOS installation instructions from the Graylog documentation.

Make sure to provide details for sending emails under the header # Email transport.

If you are using a firewall, open port 9000 for TCP and 51400 for UDP, for instance by assuring the following lines are in /etc/sysconfig/iptables:

-A INPUT -p tcp -m state --state NEW -m tcp --dport 9000 -j ACCEPT
-A INPUT -p udp -m state --state NEW -m udp --dport 51400 -j ACCEPT

Don’t forget to restart the iptables service: sudo systemctl restart iptables.

Collecting the Logs from Another CentOS System

  • Install rsyslog on the system

sudo yum install rsyslog

  • Enable and start rsyslog service (also see this guide)

sudo systemctl enable rsyslog

sudo systemctl start rsyslog

  • Edit the file /etc/rsyslog.conf and put the following line at the end, into the section marked as # ### begin forwarding rule ### (replace yourserver.com with your Graylog server address):

*.* @yourserver.com:51400;RSYSLOG_SyslogProtocol23Format

  • Restart rsyslog

sudo systemctl restart rsyslog

The rsyslog log messages should now be sent to your server. Give it a few minutes if you don't see the messages in Graylog immediately. Otherwise, check the system log for any errors (sudo cat /var/log/messages).

Also, you can test the connection by entering the following on the monitored system:

nc -u yourserver.com 51400

Type Hi and press Enter; this should result in the message Hi being received by Graylog.

Analysing Logs

The next steps are quite easy to do since they can be done in the excellent Graylog user interface.

  • Create a stream for critical errors (for instance, matching all messages with a syslog level of 3/error or lower).

  • Create an alert. Trigger it when there is ‘more than 0’ messages in the stream you have just created.

Done! You are now collecting logs from a server and you will receive an email notification whenever there is a serious issue reported on the server!

Installing Jenkins on CentOS 7

I set up a Jenkins server on a brand new CentOS 7 VPS. In the following are the instructions for doing this, in case you are looking at doing the same:

Setting up Jenkins Server

  • Install Java and Jenkins

sudo yum install java-1.8.0-openjdk
sudo wget -O /etc/yum.repos.d/jenkins.repo http://pkg.jenkins-ci.org/redhat/jenkins.repo
sudo rpm --import https://jenkins-ci.org/redhat/jenkins-ci.org.key
sudo yum install jenkins

Or for stable version (link did not work for me when I tried it)

sudo wget -O /etc/yum.repos.d/jenkins.repo https://pkg.jenkins.io/redhat-stable/jenkins.repo
sudo rpm --import https://pkg.jenkins.io/redhat-stable/jenkins.io.key
sudo yum install jenkins
  • Start Jenkins server
sudo systemctl start jenkins
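  • Optionally, enable Jenkins to start on boot
sudo systemctl enable jenkins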

You should now be able to access Jenkins at yourserver.com:8080 (if not, see troubleshooting steps at the bottom).

If you want to access your server more securely on port 80, you can do so by installing nginx as outlined in step 4 of this article: How to Install Jenkins on CentOS 7.

Connecting to a Git Repo

You will probably want to connect to a git repository next. This is also somewhat dependent on the operating system you use, so I provide the steps to do this on CentOS as well:

  • Install git
sudo yum install git
  • Generate an SSH key on the server
ssh-keygen -t rsa
  • When prompted, save the SSH key under the path /var/lib/jenkins/.ssh/id_rsa (I got this idea from reading the comments here)
  • Assure that the .ssh directory is owned by the Jenkins user:
sudo chown -R jenkins:jenkins /var/lib/jenkins/.ssh
  • Copy the public generated key to your git server (or add it in the GitHub/BitBucket web interface)
  • Assure your git server is listed in the known_hosts file (a sketch for populating it follows after this list). In my case, since I am using BitBucket, my /var/lib/jenkins/.ssh/known_hosts file contains something like the following
bitbucket.org, ssh-rsa [...]
  • You can now create a new project and use Git as the SCM. You don't need to provide any git credentials. Jenkins pulls these automatically from the /var/lib/jenkins/.ssh directory. There are good instructions for this available here.
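
One way to populate the known_hosts file is with ssh-keyscan, run as the jenkins user so the file ownership stays correct (a sketch):

sudo -u jenkins sh -c 'ssh-keyscan bitbucket.org >> /var/lib/jenkins/.ssh/known_hosts'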

Connecting to GitHub

  • In the Jenkins web interface, click on Credentials and then select the Jenkins Global credentials. Add a credential for GitHub which includes your GitHub username and password.
  • In the Jenkins web interface, click on Manage Jenkins and then on Configure System. Then scroll down to GitHub and then under GitHub servers click the Advanced Button. Then click the button Manage additional GitHub actions.


  • In the popup select Convert login and password to token and follow the prompts. This will result in a new credential having been created. Save and reload the page.
  • Now go back to the GitHub servers section and click to add an additional server. As credential, select the credential which has just been created.
  • In the Jenkins web interface, click on New Item and then select GitHub organisation and connect it to your user account.

Any of your GitHub projects will be automatically added to Jenkins if they contain a Jenkinsfile. Here is an example.

Connect with BitBucket

  • First, you will need to install the BitBucket plugin.
  • After it is installed, create a normal git project.
  • Go to the Configuration for this project and, under Build Triggers, select the option Build when a change is pushed to BitBucket

  • Log into BitBucket and create a webhook in the settings for your project pointing to your Jenkins server as follows: http://yourserver.com/bitbucket-hook/ (note the slash at the end)

Testing a Java Project

Chances are high that you would like to run tests against a Java project; in the following, some instructions to get that working:
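
As a starting point, here is a minimal sketch of a Jenkinsfile for a Maven-based project (it assumes Maven is installed on the Jenkins server and the JUnit plugin is available):

pipeline {
    agent any
    stages {
        stage('Build and Test') {
            steps {
                // run the Maven build including the unit tests
                sh 'mvn -B clean verify'
            }
        }
    }
    post {
        always {
            // collect the test results so Jenkins can display them
            junit '**/target/surefire-reports/*.xml'
        }
    }
}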


Troubleshooting

  • If you cannot open the Jenkins web UI, you most likely have a problem with your firewall. Temporarily disable your firewall with `sudo systemctl stop iptables` and see if it works then.
  • If it does, you might want to check your rules in `/etc/sysconfig/iptables` and assure that port 8080 is open.
  • Check the log file at:
sudo cat /var/log/jenkins/jenkins.log


Test Latency Between Two Servers (Linux)

Today I was looking for a simple way to test the latency and bandwidth between two Linux servers.

The easiest way, of course, is to just use ping. The ping utility should be available on almost any Linux server and is extremely easy to use. Just login to one of your servers and then execute the following command using the IP address of your second server:

ping x.x.x.x

You can leave this running for a while and when you have seen enough data, just hit Ctrl + C to interrupt the program. This will result in an output such as the following:

PING x.x.x.x (x.x.x.x) 56(84) bytes of data.
64 bytes from x.x.x.x: icmp_seq=1 ttl=64 time=0.180 ms
64 bytes from x.x.x.x: icmp_seq=2 ttl=64 time=0.150 ms
64 bytes from x.x.x.x: icmp_seq=3 ttl=64 time=0.148 ms
64 bytes from x.x.x.x: icmp_seq=4 ttl=64 time=0.150 ms

--- x.x.x.x ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3000ms
rtt min/avg/max/mdev = 0.148/0.157/0.180/0.013 ms

Important to note here are the latencies for the individual tests (the time= values) as well as the overall average (avg in the rtt summary). This shows us that there is an average latency of 0.157 ms between the two servers tested.

In order to test the bandwidth and get some more information about latencies, you might also want to install the iperf tool.
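
For instance (a sketch using iperf3, which should be available via your package manager on both machines):

# on the first server, start iperf3 in server mode
iperf3 -s

# on the second server, run a bandwidth test against the first one
iperf3 -c x.x.x.x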

Resize EC2 Volume (without Resizing Partition)


You would like to resize a volume attached to an EC2 instance.


Do the following:

  • Create a snapshot of your volume (instructions)
  • Stop your instance
  • Go to EBS / Volumes and select Actions / Modify Volume


  • Enter the new size for your volume (note you can only ever make the volume larger) and click on Modify


  • Wait for the modification to be complete (this might take a while, like 30 min or so)
  • Start your instance

Now, if everything went well, you should have more space available on the disk for the virtual machine. To confirm this, run:

df -h

You should see the new size of the volume as the size of your main partition.



  • If the size of your partition does not match the size of the volume, you probably need to resize your partition (instructions; see also the sketch after this list).
  • Resizing the partition is a very painful process that I think is best avoided at all costs. It helps if the EC2 instance attached to the volume is stopped when the volume resize is performed; assure that this is the case before you do the resize.
  • If you forgot to stop your instance and need to do a partition resize, there is a little workaround: wait for six hours, then resize your volume again (this time while the instance is stopped). Then it will hopefully adjust your partition size to the correct size.
  • In the above, you might be able to start up your instance even while the new volume is still optimizing. I haven't tested this, though, but my guess is that it would work.
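
If you do end up having to grow the partition and file system by hand, the following is a rough sketch (assuming the volume is attached as /dev/xvda with the root partition being partition 1; growpart is part of the cloud-utils-growpart package):

# grow partition 1 of /dev/xvda to fill the volume
sudo growpart /dev/xvda 1

# grow the file system; for XFS (the CentOS 7 default):
sudo xfs_growfs /

# or, for ext4:
sudo resize2fs /dev/xvda1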


Set up MySQL Replication with Amazon RDS


You have an existing server that runs a MySQL database (either on EC2 or not) and you would like to replicate this server with an Amazon RDS MySQL instance.

After you follow the instructions from Amazon, your slave reports the IO status:

Slave_IO_State: Connecting to master

… and the replication does not work.


AWS provides very good documentation on how to set up the replication: Replication with a MySQL or MariaDB Instance Running External to Amazon RDS.

Follow the steps there but be aware of the following pitfall:

In step 6, `create a user that will be used for replication`: it says you should create a user for the domain ‘mydomain.com’. That will in all likelihood not work. Instead, try to find out the IP address of the Amazon RDS instance that should be the replication slave.

One way to do this is as follows:

  • Create the ‘repl_user’ for the domain ‘%’, e.g.:
CREATE USER 'repl_user'@'%' IDENTIFIED BY '<password>';
  • Also do the grants for this user, e.g.:
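GRANT REPLICATION SLAVE ON *.* TO 'repl_user'@'%';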
  • Open port 3306 on your server for any IP address.
  • Then the replication should work.
  • Go to your master and run the following command:
SHOW PROCESSLIST;
  • Find the process with the user repl_user and get the IP address from there. This is the IP address of your Amazon RDS slave server.
  • Delete the user 'repl_user'@'%' on the master
  • Create the user 'repl_user'@'[IP address of slave]' on the master
  • Modify the firewall of your master to only accept connections on port 3306 from the IP address of the slave.
  • Restart replication with
call mysql.rds_stop_replication;
call mysql.rds_start_replication;
  • And check the status with
show slave status\G

The slave IO status should now be “Waiting for master to send event”.