
What’s New in Jenkins 2.0

If you like pipelines—specifically the kind that facilitate continuous software delivery, not the ones that drain stuff from your kitchen sink—you’ll love Jenkins 2.0. Pipelines-as-code are one of the headline features in the latest version of Jenkins.

Keep reading for more on pipelines and other cool enhancements in Jenkins 2.0, and how you can take advantage of them.

Jenkins 2.0: New Features Outline

Released in April, Jenkins 2.0 is the much-updated version of the open source continuous integration and delivery platform that DevOps teams have known and loved since it first appeared five years ago as a fork of Oracle’s Hudson tool.

The new features in Jenkins 2.0 fall into two main categories. The first consists of usability enhancements involving the interface. The second involves new technical features, which center mostly on delivery pipelines that can be defined as code.

I’ll outline both of these categories below. For the second, I’ll delve into some technical details by explaining how pipelines work in Jenkins 2.0.

Usability Tweaks

There’s not too much to say from a DevOps perspective about changes in the first category. They primarily involve redesign of part of the Jenkins GUI. For example, the job configuration page now looks like this:

[Screenshot: the redesigned job configuration page]

Jenkins developers also worked to improve usability for the 2.0 release by simplifying the plugin experience. A basic set of “suggested” plugins is now installed by default. The developers say they made this change so that new Jenkins users can get up and running more quickly, without worrying about wrapping their heads around all of the platform’s plugins right away.

Using Pipelines in Jenkins 2.0

If you’re a developer, the most interesting part of the new Jenkins release will probably be pipelines. The vision behind the pipelines, according to Jenkins developers, is to provide a way “to model, orchestrate and visualize [the] entire delivery pipeline.”

The advantage of pipelines is that they make it easy to script continuous delivery. Instead of running build jobs on an irregular basis, you can use pipelines to define the whole build process in a simple script, which is broken down into distinct components (defined by “steps,” “nodes” and “stages”) for each part of the process. And you can run the same pipeline script on an ongoing basis.

Pipelines also support integration with other Jenkins plugins, and they persist across instances of your Jenkins master.

Using Pipelines in Jenkins 2.0 involves just a few basic steps:

  1. Install the Pipeline plugin into your environment, if it is not already there. (Jenkins versions 2.0 and later should include the Pipeline plugin by default.)
  2. Write your pipeline script either by entering the code directly into the Jenkins Web interface or inserting it into a Jenkinsfile that you check into your source code repository.
  3. Click “Build Now” in Jenkins to create your pipeline.

The syntax of pipeline scripts is fairly simple, and the format should be familiar to anyone with basic scripting or Java programming experience. For details, check out the Jenkins pipeline documentation.
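To make that concrete, here is a minimal scripted-pipeline sketch of the kind of thing a Jenkinsfile might contain (the stage names and build commands are illustrative, not from any particular project):

```groovy
// Minimal scripted pipeline: a node allocates an executor,
// and stages break the build into labelled, visualizable parts.
node {
    stage('Checkout') {
        checkout scm          // pull the source the Jenkinsfile lives in
    }
    stage('Build') {
        sh 'make'             // any shell step; Windows agents use bat instead
    }
    stage('Test') {
        sh 'make test'
    }
}
```

Checked into the repository as a Jenkinsfile, this same script runs on every build, which is what makes the pipeline repeatable.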

You can monitor the progress of builds that you have configured through pipelines using Stage View, a new part of the Jenkins interface.

Things That Could Be Better

Alas, Jenkins 2.0 pipelines are not perfect. (What is?) One drawback is that the script syntax has to vary a bit depending on whether you run Jenkins on Windows or Linux. In the latter case, you make calls to sh and use Unix-style file separators, whereas on Windows you use bat and backslashes (which you have to escape, meaning \ becomes \\ in a file path) to identify file locations. This is not ideal.
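A step that must run on both kinds of agents ends up looking something like this (the paths are hypothetical; note the doubled backslashes in the Windows branch):

```groovy
// isUnix() is a built-in pipeline step for exactly this situation.
if (isUnix()) {
    sh 'ls /var/lib/jenkins/workspace'
} else {
    bat 'dir C:\\jenkins\\workspace'   // each \ in the path is written as \\
}
```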

But it’s also not a big deal, and it probably won’t matter much to most people. Overall, pipelines are a great way to turn Jenkins 2.0 into a platform not just for continuous integration, but for continuous delivery as well.


About Hemant Jain

Hemant Jain is the founder and owner of Rapidera Technologies, a full service software development shop. He and his team focus a lot on modern software delivery techniques and tools. Prior to Rapidera he managed large scale enterprise development projects at Autodesk and Deloitte.

Setting up a LAMP stack on Red Hat Enterprise Linux

You obviously know what a LAMP stack is if you’ve managed to find your way here, but for those who may be unsure, the key is in the name: (L)inux, (A)pache, (M)ariaDB, (P)HP—a term that has become synonymous around the globe with building a basic web server with database and PHP functionality. There are a myriad of web applications, ranging from WordPress to Joomla to Magento, that all use this setup, and if you know how to get it up and running, then you’re off to a great start. It couldn’t be easier with RHEL, so let’s get started. (MariaDB can also be exchanged for MySQL or another database of your choice.)

Our Objectives

  • Set up a Red Hat Enterprise Linux (RHEL) 7.2 virtual machine
  • Install required applications (Apache, MariaDB, PHP)
  • Configure an initial virtual host in Apache
  • Configure MySQL and create a database for testing
  • Demonstrate PHP working with a test page, which also pulls data from our test database


Installing RHEL on a VM

To get started, I’m firing up a virtual machine with the following specifications:

  • 1GB RAM
  • 16GB virtual hard drive space
  • 1 vCPU

Now it’s time to power up our VM and let it boot from the RHEL ISO. Once you’ve booted into the setup GUI, you’ll be asked some basic questions. In my case, I simply selected my time zone and specified my network settings. I would suggest leaving everything else at default for simplicity.

[Screenshot: the RHEL installer setup screen]

Once RHEL has successfully installed, you can reboot into your new installation. As we left the default “minimal install” selected, we’ll need to manually register the system to the Red Hat network and attach it to a subscription to allow it to receive updates and packages. Simply log in and run subscription-manager register --auto-attach and you will be prompted to enter your username and password.

Installing required applications

Great! Before getting started, I would first recommend you run yum -y update to grab any recent security updates, then reboot.

Now we’re ready to install Apache, MariaDB and PHP. Simply run:

yum -y install httpd php php-mysql mariadb mariadb-server

Then wait for yum (Yellowdog Updater, Modified) to do its thing. After yum has finished, we want to make sure that our newly installed applications are set to start at boot. To do this, we run:

systemctl enable httpd && systemctl enable mariadb 
systemctl start mariadb && systemctl start httpd

This will get them all up and running for the first time.

Configure Virtual Host in Apache

This step isn’t strictly necessary if you’re only wanting to run one application on your server, but I always try to get a virtual host configured as it keeps things tidy and allows for easy expansion for hosting other websites on your LAMP server in the future if you feel like doing so.

So let’s go ahead and create a new virtual host—but first, let’s create a directory for this virtual host to serve files from. And whilst we’re at it, we might as well add a ‘phpinfo’ file there to validate our PHP configuration.

mkdir /var/www/test-site && echo -e "<?php \nphpinfo();" > /var/www/test-site/index.php

Creating the virtual host is easy. Let’s create a new file named /etc/httpd/conf.d/test-site.conf and add the following to it:

<VirtualHost *:80>
 ServerName test-site
 DocumentRoot "/var/www/test-site"
</VirtualHost>

If you’re following this guide exactly, then you’ll need to add a host entry on your local computer so it knows where ‘test-site’ exists. Simply take the IP address of the server you’re configuring your LAMP stack on and insert it into your hosts file (where x.x.x.x is the server IP):

x.x.x.x test-site

Now you’re ready to browse to your new LAMP server—but wait, the page load times out and can’t connect. You need to allow web traffic through the firewall with the following command:

firewall-cmd --zone=public --add-service=http

Note that this changes the running firewall configuration only; to keep the port open across reboots, run the same command again with --permanent added.

If everything goes to plan, you should now see a phpinfo screen confirming that Apache and PHP are set up and working together.

Configure MySQL and create database for testing

Although MariaDB is now up and running, it’s worth running /usr/bin/mysql_secure_installation to secure MariaDB further. You should do the following when prompted:

  • Set the root password for access to MariaDB
  • Remove anonymous users
  • Disallow root login remotely (unless you want to be able to connect to it remotely, of course)
  • Remove test database and access to it
  • Reload privilege tables

Great! Now we want to go ahead and make sure that a PHP application running on our LAMP server can access a database in MariaDB and see tables. Firstly, we’ll need to create a database for testing and create some tables. To do this, we need to first connect to MariaDB with our root username and password. I have included a screenshot of this below to show you what sort of output to expect. Upon logging in with mysql -u root -p you’ll need to run the following commands:

  • create database test;
  • use test;
  • create table example ( id INT, data VARCHAR(100) );
  • create table example1 ( id INT, data VARCHAR(100) );
  • create table example2 ( id INT, data VARCHAR(100) );
  • create user 'test-site'@'localhost' identified by 'password';
  • grant all on test.* to 'test-site'@'localhost';
  • flush privileges;
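As a quick sanity check before moving on, you can log back in as the newly created restricted user (mysql -u test-site -p) and confirm it can see the new tables:

```sql
-- Run as the 'test-site' user; should list example, example1 and example2
SHOW TABLES FROM test;
```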

[Screenshot: MariaDB session showing the commands above and their output]

In the above example, we’re creating a database, creating three example tables within this database, and creating a user with limited access to this database. We don’t want to use our root user MariaDB credentials for any interaction between our web application and the database, as this is insecure.

Now that we’ve got our database and tables set up, let’s delete our old index.php file and recreate it with the following PHP code. If you’ve been following this guide exactly, then you’ll be good to go with the existing ‘dbname’, ‘dbuser’, ‘dbpass’ and ‘dbhost’ variables as set below. But if not, then you’ll simply need to change these to match your chosen credentials and database name.

<?php
$dbname = 'test';
$dbuser = 'test-site';
$dbpass = 'password';
$dbhost = 'localhost';
$connect = mysql_connect($dbhost, $dbuser, $dbpass) or die("Unable to Connect to '$dbhost'");
mysql_select_db($dbname) or die("Could not open the db '$dbname'");
$test_query = "SHOW TABLES FROM $dbname";
$result = mysql_query($test_query);
$tblCnt = 0;
while($tbl = mysql_fetch_array($result)) {
  $tblCnt++;
}
if (!$tblCnt) {
  echo "There are no tables<br />\n";
} else {
  echo "There are $tblCnt tables<br />\n";
}
?>
If everything has gone to plan, then the next time you browse to your server you should see the following:

[Screenshot: browser output showing the table count returned by the test page]

Final Thoughts

So there you have it. Setting up RHEL to serve your PHP application with a database backend couldn’t be easier! Adding more sites to your Apache configuration is easy, too: simply add further VirtualHost config files in the manner shown above. You can go more in-depth by adding extra configuration parameters to each virtual host. For instance, you may wish for one site to show a directory index while preventing another from exhibiting the same behaviour.



About Keith Rogers

Keith Rogers is an IT professional with over 10 years’ experience in modern development practices who has built full development stacks. Currently he works for a broadcasting organization in the DevOps space, with a focus on automation. In his spare time he tinkers with modern development tools and is a technical contributor at Fixate IO.


Lightweight Application Instrumentation with PCP

Wait… what?

I was involved in diagnosing a production system performance problem: a web application serving thousands of interactive users was acting up. Symptoms included unexpectedly significant time spent running kernel code on behalf of the application, and at those times end users saw substantial delays.

As someone with a systems programming background, I figured I had a decent shot at figuring this one out. Naively I reached for strace(1), the system call and signal tracer, to provide insights (this was long before perf(1) came along, in my defence).

When I fired up strace, however, things rapidly went from bad to oh-so-much-worse, with the application becoming single-threaded and almost entirely locking up under ptrace(2) control. Nothing was able to restore responsiveness once that flat spin had been induced. Sadly, an unscheduled downtime resulted, and I wandered off to lick my wounds, wondering what on earth had just happened.


Without going into the details of what actually happened, or the weird and wonderful things going on under the hood inside strace, suffice it to say this was a pathological scenario and strace was certainly the wrong tool for the job. Hindsight is 20/20!

However, lesson learned – and it’s not only strace, of course. Many analysis tools take the behavior-modifying approach of “switch on special/new code paths, export lots of special/new diagnostics,” and they can make production system failure situations far, far worse.

The kernel and many system services provide a wealth of always-enabled instrumentation, and in my experience there is a good return on investment when business-critical applications do the same. Knowing that counters, gauges and other measures are always there, always updated, and – ideally – always being sampled and recorded, builds high levels of confidence in their safety at acceptable (known, fixed, low) costs.


There are many different projects and APIs for instrumenting applications, with a variety of design goals, trade-offs and overheads. Many articles have been devoted to the sorts of things worth instrumenting within an application, so let’s skip over that (extremely important!) topic here and instead focus on the underlying mechanisms.

One thing to note first up is that all the approaches require some form of inter-process communication mechanism, to get the metric values out of the application address space and into the monitoring tools – this can involve varying degrees of memory copying, context switching, synchronization and various other forms of impact on the running application.

In the Performance Co-Pilot (PCP) toolkit, the MMV – “Memory Mapped Value” – approach tackles this issue of providing low-cost, lightweight metric value extraction from running applications.

The approach is built around shared memory, where the application registers metrics and is assigned fixed memory locations for the safe manipulation of each metric value. The application is then left to update each in-memory value according to its needs and the semantics of each metric.

The memory locations are allocated, and fixed, in such a way that they can also be safely accessed by separate (collector, monitoring and/or analysis) processes. Naturally, the overheads of actually counting events, accumulating byte counts, gauging utilization and so on cannot be removed, but the goal is to make that the only cost incurred.

Application instrumentation via shared memory mappings

In the MMV model, at the points where metrics are updated, the only cost involved is the memory mapping update, which is a single memory store operation. There is no need to explicitly transfer control to any other thread or process, nor to allocate memory, nor to make system or library calls. The external PCP sampling process(es) sample values only at times driven by those tools, placing no overhead on the instrumented application.

The other good news is the MMV approach scales well as metrics are added; applications with many hundreds of metrics are able to update values with the same overheads as lightly instrumented applications.
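PCP’s real MMV file format and API are more involved than this, but the core mechanism (the application updates a fixed location in a shared mapping, while a separate sampler reads it at its own cadence) can be sketched with a plain memory-mapped file. The file name and layout here are illustrative, not the PCP format:

```python
import mmap
import os
import struct
import tempfile

# "Registration": reserve one 8-byte slot for a counter metric.
path = os.path.join(tempfile.mkdtemp(), "demo.mmv")
with open(path, "wb") as f:
    f.write(b"\x00" * 8)

# Instrumented-application side: each update is just an in-memory store,
# with no system call, allocation, or context switch on the hot path.
with open(path, "r+b") as f:
    app_view = mmap.mmap(f.fileno(), 8)
    for _ in range(1000):
        value, = struct.unpack_from("<Q", app_view, 0)
        struct.pack_into("<Q", app_view, 0, value + 1)
    app_view.flush()

# Monitoring side: an independent, read-only mapping of the same slot,
# sampled whenever the monitoring tool chooses.
with open(path, "rb") as f:
    sampler_view = mmap.mmap(f.fileno(), 8, access=mmap.ACCESS_READ)
    count, = struct.unpack_from("<Q", sampler_view, 0)

print("sampled counter value:", count)
```

The point of the sketch is that the update loop touches only mapped memory; the sampler pays the cost of reading, on its own schedule.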

On the other hand, to attain this level of performance there are trade-offs being made. It’s assumed that always-enabled sampling is the analysis model, so this technique is not suited to event tracing (more the domain of complementary approaches like DTrace, ETW, LTTng and SystemTap), nor to compound data structures. But for the kinds of performance values we’re looking at here, where each metric is usually an independent numeric value, this proves to be a worthwhile trade-off in practice for always-enabled instrumentation.

Where?  When?

All Red Hat Enterprise Linux releases from 6.6 onward include MMV as an instrumentation approach you can use. Sample instrumented application code is available in the pcp-devel package.

The service involved with reading the memory mappings is pmcd(1) and its pmdammv(1) shared library helper.  Many PCP tools exist that will record, visualize, infer and report on your new application metrics.

High-level language projects that generate MMV mappings natively (Speed for Go and Parfait for Java) are also available from GitHub and Maven Central.

Provisioning Vagrant boxes using Ansible

Ansible is a great tool for system administrators who are trying to automate their work. From configuration management to provisioning and managing containers for application deployments, Ansible makes it easy. In this article, we will see how we can use Ansible to provision Vagrant boxes.

So, what exactly is a Vagrant box? In simple terms, we can think of a Vagrant box as a virtual machine prepackaged with the development tools we require to run our development environment. We can use these boxes to distribute a consistent development environment that other team members can use to work on projects. Using Ansible, we can automate the task of provisioning Vagrant boxes with our development packages. So, let’s see how we can do this.

For this tutorial, I am using Fedora 24 as my host system and Ubuntu 14.04 (Trusty) as my Vagrant box.
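The glue between the two tools lives in the Vagrantfile: Vagrant’s built-in Ansible provisioner runs a playbook against the box after it boots. A minimal sketch, assuming a playbook named playbook.yml sits next to the Vagrantfile:

```ruby
# Vagrantfile: boot an Ubuntu 14.04 (Trusty) box and provision it with Ansible.
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/trusty64"

  # Runs `ansible-playbook` on the host against the guest during `vagrant up`.
  config.vm.provision "ansible" do |ansible|
    ansible.playbook = "playbook.yml"
  end
end
```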

Editor’s note: If you want to get started with Vagrant to provision or build containers using Red Hat Enterprise Linux with just a few clicks, check out the Red Hat Container Development Kit.

Continue reading “Provisioning Vagrant boxes using Ansible”

Build your next cloud-based PaaS in under an hour

The charter of Open Innovation Labs is to help our customers accelerate application development and realize the latest advancements in software delivery by providing skills, mentoring, and tools. Some of the challenges I frequently hear from customers involve Platform as a Service (PaaS) environment provisioning and configuration. This article is the first in a series that guides you through installation, configuration, and usage of Red Hat OpenShift Container Platform (OCP) on Amazon Web Services (AWS).

This installation covers cloud-based security group creation and Amazon Route 53 DNS, creates a server farm that survives power cycles, and configures OCP for web-based authentication and a persistent registry. This article and its companion video eliminate the pain points of a push-button installation and validation of a four-node Red Hat OCP cluster on AWS.

By the end of the tutorial, you should have a working Red Hat OCP PaaS that is ready to facilitate your team’s application development and DevOps pipeline.

Please note: The setup process uses Red Hat Ansible and an enhanced version of the openshift-ansible aws community installer.

Continue reading “Build your next cloud-based PaaS in under an hour”


Red Hat Software Collections: Why They’re Awesome, and How to Use Them

Red Hat Software Collections can make your life as a programmer or admin immensely easier.

Like death, taxes and zombies, dealing with different versions of software is something you just can’t avoid. It’s a nasty but necessary fact of life.

Traditionally, when developers and system admins grapple with this issue, they have to sacrifice something. If you want to run the latest and greatest version of a web app, it might not support users with outdated browsers. If you install the newest beta release of Python so you can test development code, it might break Python scripts written for older releases. If you have a system with multiple users, each might want a different version of Ruby. And so on.

Software Collections provide a solution to conundrums like these. They let you have your cake and eat it, too.

In other (more technical) words, Software Collections make it possible to have multiple versions of the same software on the same system. You use a simple tool to tell the system which version to activate as needed.

If that sounds awesome, it is. Keep reading for a more detailed explanation of how Software Collections work, and an overview of using them on your Red Hat system.

Continue reading “Red Hat Software Collections: Why They’re Awesome, and How to Use Them”