Installing Jenkins on CentOS 7

I set up a Jenkins server on a brand new CentOS 7 VPS. Below are the instructions for doing this in case you are looking to do the same:

Setting up Jenkins Server

sudo yum install java-1.8.0-openjdk
sudo wget -O /etc/yum.repos.d/jenkins.repo
sudo rpm --import
sudo yum install jenkins

Or, for the stable version (the link did not work for me when I tried it):

sudo wget -O /etc/yum.repos.d/jenkins.repo
sudo rpm --import
sudo yum install jenkins
  • Start Jenkins server
sudo systemctl start jenkins

You should now be able to access the Jenkins web interface (if not, see the troubleshooting steps at the bottom).

If you want to access your server more securely on port 80, you can do so by installing nginx as outlined in step 4 of this article: How to Install Jenkins on CentOS 7.

Connecting to a Git Repo

You will probably want to connect to a git repository next. This is also somewhat dependent on the operating system you use, so I provide the steps to do this on CentOS as well:

  • Install git
sudo yum install git
  • Generate an SSH key on the server
ssh-keygen -t rsa
  • When prompted, save the SSH key under the following path (I got this idea from reading the comments here)
  • Ensure that the .ssh directory is owned by the Jenkins user:
sudo chown -R jenkins:jenkins /var/lib/jenkins/.ssh
  • Copy the generated public key to your git server (or add it in the GitHub/BitBucket web interface)
  • Ensure your git server is listed in the known_hosts file. In my case, since I am using BitBucket, my /var/lib/jenkins/.ssh/known_hosts file contains something like the following: ssh-rsa [...]
  • You can now create a new project and use Git as the SCM. You don’t need to provide any git credentials. Jenkins pulls these automatically from the /var/lib/jenkins/.ssh directory. There are good instructions for this available here.
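The key-setup steps above can be condensed into a small script. This is only a sketch under assumptions: the Jenkins home is /var/lib/jenkins, the service user is jenkins, and bitbucket.org stands in for your git server; run it as root and adjust as needed.

```shell
#!/bin/sh
set -e

# Sketch: prepare a passphrase-less SSH key for the Jenkins service user.
# Usage: setup_jenkins_ssh <jenkins home> <owner as user:group>
setup_jenkins_ssh() {
  home_dir="$1"
  owner="$2"
  mkdir -p "$home_dir/.ssh"
  # No passphrase (-N "") because Jenkins cannot answer interactive prompts
  ssh-keygen -t rsa -N "" -q -f "$home_dir/.ssh/id_rsa"
  chown -R "$owner" "$home_dir/.ssh"
  chmod 700 "$home_dir/.ssh"
  chmod 600 "$home_dir/.ssh/id_rsa"
}

# On the Jenkins server you would run (as root):
#   setup_jenkins_ssh /var/lib/jenkins jenkins:jenkins
#   ssh-keyscan bitbucket.org >> /var/lib/jenkins/.ssh/known_hosts
```

The ssh-keyscan line (commented out above since it needs network access) is one way to populate the known_hosts file mentioned in the last step.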

Connecting to GitHub

  • In the Jenkins web interface, click on Credentials and then select the Jenkins Global credentials. Add a credential for GitHub which includes your GitHub username and password.
  • In the Jenkins web interface, click on Manage Jenkins and then on Configure System. Scroll down to GitHub and, under GitHub servers, click the Advanced button. Then click the button Manage additional GitHub actions.

[Screenshot: Manage additional GitHub actions]

  • In the popup select Convert login and password to token and follow the prompts. This results in a new credential being created. Save and reload the page.
  • Now go back to the GitHub servers section and click to add an additional server. As credential, select the credential which you have just created.
  • In the Jenkins web interface, click on New Item and then select GitHub organisation and connect it to your user account.

Any of your GitHub projects will be automatically added to Jenkins if they contain a Jenkinsfile. Here is an example.

Connect with BitBucket

  • First, you will need to install the BitBucket plugin.
  • After it is installed, create a normal git project.
  • Go to the Configuration for this project and select the following option:

[Screenshot: BitBucket trigger option]

  • Log into BitBucket and create a webhook in the settings for your project pointing to your Jenkins server as follows (note the slash at the end):

Testing a Java Project

Chances are high you would like to run tests against a Java project; here are some instructions to get that working:


Troubleshooting

  • If you cannot open the Jenkins web UI, you most likely have a problem with your firewall. Temporarily disable your firewall with `sudo systemctl stop iptables` and see if it works then.
  • If it does, you might want to check your rules in `/etc/sysconfig/iptables` and ensure that port 8080 is open
  • Check the log file at:
sudo cat /var/log/jenkins/jenkins.log
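If port 8080 turns out to be closed, a rule along these lines in /etc/sysconfig/iptables would open it. This is only a sketch; its placement matters, as it must come before any blanket REJECT rule in your chain:

```
-A INPUT -p tcp -m tcp --dport 8080 -j ACCEPT
```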


Test Latency Between Two Servers (Linux)

Today I was looking for a simple way to test the latency and bandwidth between two Linux servers.

The easiest way, of course, is to just use ping. The ping utility should be available on almost any Linux server and is extremely easy to use. Just login to one of your servers and then execute the following command using the IP address of your second server:

ping x.x.x.x

You can leave this running for a while and when you have seen enough data, just hit Ctrl + C to interrupt the program. This will result in an output such as the following:

PING ( 56(84) bytes of data.
64 bytes from icmp_seq=1 ttl=64 time=0.180 ms
64 bytes from icmp_seq=2 ttl=64 time=0.150 ms
64 bytes from icmp_seq=3 ttl=64 time=0.148 ms
64 bytes from icmp_seq=4 ttl=64 time=0.150 ms
--- ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3000ms
rtt min/avg/max/mdev = 0.148/0.157/0.180/0.013 ms

Important to note here are the latencies for the individual tests as well as the overall average, highlighted in bold above. This shows an average latency of 0.157 ms between the two servers tested.
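The average in that summary line can also be pulled out programmatically. A small sketch, assuming the standard iputils ping output format shown above:

```shell
# Print only the average round-trip time (ms) from ping's summary line.
# The summary has the form: rtt min/avg/max/mdev = 0.148/0.157/0.180/0.013 ms
# so splitting on '/' puts the average in field 5.
avg_rtt() {
  awk -F'/' '/^rtt/ { print $5 }'
}

# Usage (x.x.x.x is a placeholder for your second server):
#   ping -c 10 x.x.x.x | avg_rtt
```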

In order to test the bandwidth and get some more information about latencies, you might also want to install the iperf tool.

Install PHP Application and WordPress Alongside Each Other


You have a webpage and you would like to serve both WordPress and files from a PHP application from the root domain. For instance, one URL will open a post on WordPress, while another will open a page in the PHP application.


  • Set up your PHP application in /var/www/html
  • Install WordPress to /usr/share/wordpress
  • Put the following lines into /usr/share/wordpress/.htaccess before # BEGIN WordPress.
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteBase /
RewriteCond "/var/www/html%{REQUEST_URI}" -f
RewriteRule ^/?(.*)$ /app/$1 [L]
</IfModule>

# BEGIN WordPress
  • Put the following line into /etc/httpd/conf/httpd.conf
Alias /app /var/www/html
  • Add the following configuration file to /etc/httpd/conf.d/wordpress.conf
Alias /wordpress /usr/share/wordpress
DocumentRoot /usr/share/wordpress

<Directory /usr/share/wordpress/>
  AddDefaultCharset UTF-8
  AllowOverride All
  Require all granted
</Directory>

Restart httpd and you are all done!




Versioning WordPress with Git and Revisr

WordPress is a powerful platform to just get a simple website up and running. Unfortunately, some things which are considered best practice in software development projects are a bit difficult to realize with WordPress. One of these things is versioning. Thankfully, there is a powerful plugin which enables versioning WordPress using a git repository: Revisr.

As of writing this, one is first greeted by an error message when visiting the Revisr website (something to do with SSL). It is safe to ignore this and to instruct your browser to show the website irrespective of the error message displayed (you won’t be giving any confidential information to this website, just browsing around).

In any case, you can download Revisr from WordPress with the following link:

Follow these steps to set it up:

  • Go to your WordPress Admin console
  • Install the plugin
  • Activate it
  • Reload the WordPress Admin console
  • Click on Revisr on the Sidebar
  • Instruct it to create a new repository

That’s already it for setting up a local git repo that will enable some versioning. However, you can also use this plugin to back up your site and all versions to a remote git repository, for instance using BitBucket. The instructions are as follows (assuming you are using an Apache web server):

  • Login to your server using SSH with a user with sudo rights
  • Execute the following
sudo ssh-keygen
  • Follow the prompts to create the key
  • Execute the following (where /var/www is the root dir of your Apache server)
sudo cp -r /root/.ssh /var/www

sudo chown -R apache:apache /var/www/.ssh
  • Create the file /var/www/.ssh/.htaccess and put in the following content (this is just a security measure)
Deny from all
  • Grab the public key and save it somewhere
sudo cat /var/www/.ssh/
  • Create a new account for BitBucket if you don’t have one already.
  • Add a public SSH key for your account. Add the SSH key you saved earlier.
  • Create a new repository. Grab the git SSH link for this repository.
  • Go back to your WordPress Admin console and select the Revisr plugin from the sidebar
  • Go to Settings / General. Set your username to git and define an email. Click Save
  • Go to Settings / Remote. Set Remote URL to the SSH link for the repository you saved earlier. Click Save.

Now you can go back to the main Revisr page and start committing changes and pushing to the remote repository!


Install Latest JDK on Linux Server

To install the Oracle JDK on a Linux server is often a tricky proposition. For one, the download page requires you to confirm a prompt and only unlocks the download link after this prompt has been confirmed (via a cookie, I think). This makes it difficult to download the binary in the first place!

Thankfully, MaxdSre has created the following handy script to download and extract the JDK:

If you run this script, you are presented with a prompt as follows:


Just select the version you require, and the script will download and install the Oracle JDK.

Finally, you might have existing JDK versions installed on your machine which are managed using alternatives. For a reference on how to point your ‘java’ command to the new installation, please see this article.


Simple MySQL / MariaDB Backup

There are many ways to back up a MySQL or MariaDB server. Some ways include using mysqldump, mydumper, LVM snapshots or XtraBackup. However, any robust backup solution boils down to one key requirement:

The ability to restore the databases to a point-in-time.

So, for instance, if your server crashes, you would like to be able to restore to the point in time before the server crashed. If data was deleted accidentally or damaged in some other way, you need to restore the data to the point in time before it was deleted or damaged.

If you use AWS RDS, this ability is provided out of the box. However, you can meet this requirement much more cost-effectively by using a simple VPS (such as Linode or Ramnode) with the setup I will describe below.

This setup will perform the following:

  • Write a log of all transactions and back up this log every 5 minutes to a remote server
  • Create a full database backup with mysqldump every day and copy this to a remote server

The backup keeps the binary logs for two days and the dumps for two weeks. Thus, any data loss should be limited to 5 minutes and full database backups should allow restoring data from up to two weeks ago.

System Landscape

  • Server 1: Runs the MariaDB or MySQL instance you want to back up
  • Server 2: Used as a remote location to store backups

(any Linux based server will do for these)

Step 1: Enable Binary Logs

On Server 1:

  • Edit your my.cnf file (e.g. under /etc/my.cnf or /etc/my.cnf.d/server.cnf) and ensure the following lines are present:
  • Create the folder logs in your MySQL data dir (e.g. /var/lib/mysql)
mkdir /var/lib/mysql/logs
  • Set owner to user mysql for folder logs
chown mysql:mysql /var/lib/mysql/logs
  • Restart MySQL server
sudo systemctl restart mysqld

Now binary logs should be written into the logs folder in your MySQL data dir.
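The exact my.cnf lines are not reproduced above; a typical fragment that enables binary logging into that folder might look like the following (the log base name is an assumption, pick your own; the two-day expiry matches the retention described earlier):

```ini
[mysqld]
server_id        = 1
log_bin          = /var/lib/mysql/logs/mysql-bin
expire_logs_days = 2
```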

Step 2: Create Script for Full Backups with mysqldump

On Server 1:

  • Create the folder /var/lib/mysql/dumps
  • Create the script /usr/local/ and copy the contents of into this script.
  • Search for the line starting with dumpopts. In this line, provide your mysql username and password.
  • Make the script executable
sudo chmod +x /usr/local/
  • Schedule the script to run once every day using cron or systemd


30 3 * * * /usr/local/

systemd – service definition

Create /etc/systemd/system/mysql_dump.service

[Unit]
Description=Dumps mysql databases to backup directory

[Service]
Type=oneshot
ExecStart=/usr/local/
systemd – timer definition

Create /etc/systemd/system/mysql_dump.timer

[Unit]
Description=Run MySQL dump once per day

[Timer]
OnCalendar=*-*-* 03:13:00
Persistent=true

[Install]
WantedBy=timers.target

And don’t forget to enable and start the timer:

sudo systemctl enable mysql_dump.timer
sudo systemctl start mysql_dump.timer
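The dump script from Step 2 is not reproduced here; the following is only a sketch of the two ideas it has to implement (the mysqldump options and file names are my assumptions, not the referenced script):

```shell
# Sketch only - not the referenced dump script.
# 1) Take one full dump per day (credentials would come from the dumpopts
#    line mentioned in Step 2), along the lines of:
#      mysqldump --all-databases --single-transaction \
#        > /var/lib/mysql/dumps/dump-$(date +%F).sql
# 2) Enforce the two-week retention described above:
prune_dumps() {
  # Delete dump files older than 14 days in the given directory
  find "$1" -name '*.sql' -mtime +14 -delete
}

# Usage: prune_dumps /var/lib/mysql/dumps
```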

Step 3: Write Script to Backup Files to Remote Server

On Server 2:

  • Log into your second server. Create a user mysqlbackup here:
sudo useradd mysqlbackup
  • Change to mysqlbackup user
sudo su - mysqlbackup
  • Create directories logs and dumps
mkdir logs
mkdir dumps

On Server 1:

  • Copy public key for root user from /root/.ssh/
  • If the public key for root does not exist, run:
sudo ssh-keygen -t rsa

On Server 2:

  • While logged in as user mysqlbackup, ensure the following file exists
  • Into this file, paste the public key for root on Server 1
  • Ensure correct permissions for the .ssh folder:
chmod 700 .ssh
chmod 600 .ssh/authorized_keys

On Server 1:

  • Test access to Server 2 (the sudo is important here, since you want to connect as the root user). Replace with the address/IP of Server 2.
sudo ssh
  • If the SSH does not work for some reason, check this guide for more information.
  • Create the script /usr/local/ Replace with the address/IP of your server.
rsync -avz --delete /var/lib/mysql/logs
rsync -avz --delete /var/lib/mysql/dumps
  • Make the script executable
sudo chmod +x /usr/local/
  • Add the following line to the crontab for the user root
*/5 * * * * /usr/local/
  • If you are using systemd, create the following two files


Create /etc/systemd/system/mysql_backup.service:

[Unit]
Description=Backs up Mysql binary logs and full backups to remote server

[Service]
Type=oneshot
ExecStart=/usr/local/

Create /etc/systemd/system/mysql_backup.timer:

[Unit]
Description=Run MySQL binlog backup and full backup sync every 5 minutes

[Timer]
OnCalendar=*:0/5
Persistent=true

[Install]
WantedBy=timers.target
Then enable and start the timer with

sudo systemctl enable mysql_backup.timer
sudo systemctl start mysql_backup.timer

Resize EC2 Volume (without Resizing Partition)


You would like to resize a volume attached to an EC2 instance.


Do the following:

  • Create a snapshot of your volume (instructions)
  • Stop your instance
  • Go to EBS / Volumes and select Actions / Modify Volume

[Screenshot: Modify Volume]

  • Enter the new size for your volume (note you can only ever make the volume larger) and click on Modify


  • Wait for the modification to be complete (this might take a while, like 30 min or so)
  • Start your instance

Now, if everything went well, you should have more space available on the disk for the virtual machine. To confirm this, run:

df -h

You should see the new size of the volume as the size of your main partition:



  • If the size of your partition does not match the size of the volume, you probably need to resize your partition (instructions).
  • Resizing the partition is a very painful process that I think is best avoided at all costs. It helps if the EC2 instance attached to the volume is stopped when the resize is performed, so make sure that this is the case before you do the resize.
  • If you forgot to stop your instance and need to do a partition resize, there is a little workaround. Wait for six hours, then resize your volume again (this time while the instance is stopped). Then, it hopefully adjusts your partition size to the correct size.
  • In the above, you might be able to start up your instance even while the new volume is still optimizing. I haven’t tested this though, but my guess is that it would work.


Sharing Folders with VirtualBox


You would like to share a folder between a VirtualBox Linux Guest and a Windows Host.


  • Start your VM
  • Switch to Windowed Mode
  • Select Devices / Shared Folders / Shared Folder Settings …
  • Create a New Folder on your Linux Guest. e.g.:
    • /home/[Your User]/WinDocuments
  • Add a new Shared folder there, e.g.
    • Share Name WinDocuments
    • Windows Folder: C:\Users\[Your User]\Documents
  • Open a Terminal and run the following:
sudo mount -t vboxsf -o rw,uid=1000,gid=1000 WinDocuments /home/[Your User]/WinDocuments

Now you should be able to copy files to and from Windows from your Linux guest.
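To avoid re-running the mount command after every reboot, the share can also go into /etc/fstab. A sketch, assuming the same share name, mount point, and uid/gid 1000 as above:

```
WinDocuments  /home/[Your User]/WinDocuments  vboxsf  rw,uid=1000,gid=1000  0  0
```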


Permission Problems

There are often tricky permission problems that prevent you from copying folders to Windows. You will get messages like ‘Access denied’ or ‘Error while copying’ or ‘You do not have permissions for this operation’.

There are many ways to fix this, but the easiest is to start the file browser of your Linux system with sudo or, if your Linux doesn’t come with a UI, copy the files using sudo.

For instance, if you have the Nautilus file browser installed, run the following on the command line:

sudo nautilus

Then navigate to your shared folder and copy the files there.

Otherwise, copy files as follows:

sudo cp /local/file /home/[Your User]/WinDocuments

Unmounting the Folder

When you are done working with the files, simply run the following from the command line:

sudo umount /home/[Your User]/WinDocuments


Install Puppet 3 in Amazon Linux

The most recent version of the Amazon Linux AMI (2015.09.1) seems to install version 2 of Puppet by default.

However, if you need to install Puppet 3, that is also easy enough.

Just type in the following to install it:

sudo yum install puppet3

If any errors pop up with respect to incorrect dependencies (this can happen if you installed Puppet 2 first), just remove these – they should be reinstalled with the correct version for Puppet 3 upon running the above command again.


Remove Hard Disk in Linux in 3 Easy Steps

This guide describes how you can unlink a hard disk from Linux/Unix. This might be useful, for instance, if you replaced a disk image in VirtualBox or another VM.

WARNING: Do a backup of your virtual machine first or, if you are running on a physical computer, make sure you know what you are doing!

1. Ensure the Hard Disk is not mounted

Edit /etc/fstab and ensure there is no mount point for any partition of the hard drive.

IMPORTANT: Make sure that as many hard drives as possible are identified by their UUID, since hard disk IDs might change. See here.

2. Delete the Partition

Use fdisk as described here.

fdisk [your disk id eg /dev/sdb]

Note: You can find out the disk id by running fdisk -l (and use sudo if there is no output)

With fdisk running, input:


then input (assuming there is only one partition; otherwise give the number of a valid partition and repeat for all partitions):


3. Restart Machine

Shut down Linux

Disconnect your hard drive.

Restart Linux.

Your hard disk should be gone and no error should occur when you are starting.


If you get a message upon booting the machine that ‘The superblock could not be read or does not describe a correct ext2 filesystem’, you are doing something wrong. Just reattach the hard disk in that case and Linux should start again. Make sure your other (non-removed) disks are identified by UUID, as noted above.