Using Red Hat JBoss Developer Studio to Debug Java Applications in the Red Hat Container Development Kit

In an earlier article, Debugging Java Applications using the Red Hat Container Development Kit, we discussed how developer productivity can be improved by remotely debugging containerized Java applications running in OpenShift and the Red Hat Container Development Kit (CDK). Not only does remote debugging provide real-time insight into the operation and performance of an application, it also reduces the cycle time a developer faces while working through a solution. That discussion included the steps necessary to configure both OpenShift and an integrated development environment (IDE), such as the Eclipse-based Red Hat JBoss Developer Studio (RH-JBDS). While the majority of those actions were automated, several manual modifications, such as configuring environment variables and exposing ports, were still needed to enable debug functionality. Thanks to advances in the Eclipse tooling for OpenShift, most if not all of these manual steps have been eliminated, resulting in a streamlined process that offers even more functionality out of the box.

Red Hat JBoss Developer Studio Integration

Enhancements made in Red Hat JBoss Developer Studio now provide full lifecycle support of the Red Hat Container Development Kit, including starting and stopping the underlying Vagrant machine, which eliminates the need to run commands in a terminal. To start the CDK from within RH-JBDS, open a new or existing workspace and open the Servers view by navigating to Window -> Show View -> Servers. With the view open, right click inside it, select New -> Server, and under the Red Hat JBoss Middleware folder choose Red Hat Container Development Kit. Keep the default host name of localhost, optionally give the connection a name of your choosing to represent the CDK, and select Next. On the next dialog, two items must be configured before the CDK itself can be configured:

Continue reading “Using Red Hat JBoss Developer Studio to Debug Java Applications in the Red Hat Container Development Kit”

Use these six simple steps to get started with Red Hat JBoss Business Resource Planner

Red Hat JBoss Business Resource Planner (part of Red Hat JBoss BRMS, the enterprise product based on the upstream OptaPlanner community project) is the leading open source constraint satisfaction solver. A constraint satisfaction solver is a solving engine built around sophisticated optimization algorithms that allows you to plan for optimal use of a limited set of constrained resources.

Every organization faces scheduling problems: assigning a limited set of resources, for example employees, assets, time and money, to build products or provide services. Resource Planner optimizes such planning problems to provide optimal utilization of resources, resulting in higher productivity, lower costs and higher customer satisfaction. Use cases include:

  • Vehicle Routing: What is the optimal set of routes for a fleet of vehicles to traverse in order to deliver to a given set of customers?
  • Employee Rostering: Find an optimal way to assign employees to shifts with a set of hard and soft constraints.
  • Cloud Optimization: What is the optimal assignment of processes to cloud computing resources (CPU, memory, disk)?
  • Job Scheduling: Optimise the scheduling of jobs of varying processing times on a set of machines with varying processing power, trying to minimize the makespan.
  • Bin Packing: Pack objects of different volumes into a finite number of bins or containers in a way that minimizes the number of bins used.
  • and many more.

All of these are so-called NP-hard problems, which implies that the time required to solve them using any currently known algorithm grows very quickly as the size of the problem increases (e.g. adding a destination to a vehicle routing problem, or a shift to an employee rostering problem). Whether such problems can be solved efficiently remains one of the principal unsolved questions in computer science today.

Because it is impractical to find a guaranteed-best solution to these problems in a limited timespan as they scale out, Business Resource Planner uses a set of sophisticated optimization heuristics and metaheuristics (like Tabu Search, Simulated Annealing and Late Acceptance) to find a near-optimal solution.

As said, every organisation has these kinds of scheduling problems, and there is much to gain from optimising them. In the remainder of this post we will walk you through a number of steps to get you started with Business Resource Planner/OptaPlanner, so you can find a near-optimal solution to your business problem and start increasing productivity, reducing costs and improving customer satisfaction.
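To give a flavour of what modelling a problem looks like before you dive into the full walkthrough, here is a rough, hypothetical Java sketch using OptaPlanner's planning annotations; the class and field names are invented for illustration and are not taken from the article. A planning entity simply marks the field the solver is allowed to change:

@PlanningEntity
public class ShiftAssignment {

    private Shift shift;        // problem fact: the shift that needs to be filled
    private Employee employee;  // planning variable: chosen by the solver

    // The solver changes this value while searching for a better overall schedule.
    @PlanningVariable(valueRangeProviderRefs = {"employeeRange"})
    public Employee getEmployee() {
        return employee;
    }

    public void setEmployee(Employee employee) {
        this.employee = employee;
    }
}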

Continue reading “Use these six simple steps to get started with Red Hat JBoss Business Resource Planner”


What’s New in Jenkins 2.0

If you like pipelines—specifically the kind that facilitate continuous software delivery, not the ones that drain stuff from your kitchen sink—you’ll love Jenkins 2.0. Pipelines-as-code are one of the headline features in the latest version of Jenkins.

Keep reading for more on pipelines and other cool enhancements in Jenkins 2.0, and how you can take advantage of them.

Jenkins 2.0: New Features Outline

Released in April, Jenkins 2.0 is the much-updated version of the open source continuous integration and delivery platform that DevOps teams have known and loved since it first appeared five years ago as a fork of Oracle’s Hudson tool.

The new features in Jenkins 2.0 fall into two main categories. The first consists of usability enhancements involving the interface. The second involves new technical features, which center mostly on delivery pipelines that can be defined as code.

I’ll outline both of these categories below. For the second, I’ll delve into some technical details by explaining how pipelines work in Jenkins 2.0.

Usability Tweaks

There’s not too much to say from a DevOps perspective about changes in the first category. They primarily involve redesign of part of the Jenkins GUI. For example, the job configuration page now looks like this:

[Screenshot: the redesigned job configuration page in Jenkins 2.0 (source: jenkins.io)]

Jenkins developers also worked to improve usability for the 2.0 release by simplifying the plugin experience. A basic set of “suggested” plugins is now installed by default. The developers say they made this change so that new Jenkins users can get up and running more quickly, without worrying about wrapping their heads around all of the platform’s plugins right away.

Using Pipelines in Jenkins 2.0

If you’re a developer, the most interesting part of the new Jenkins release will probably be pipelines. The vision behind the pipelines, according to Jenkins developers, is to provide a way “to model, orchestrate and visualize [the] entire delivery pipeline.”

The advantage of pipelines is that they make it easy to script continuous delivery. Instead of running build jobs on an irregular basis, you can use pipelines to define the whole build process in a simple script, which is broken down into distinct components (defined by “steps,” “nodes” and “stages”) for each part of the process. And you can run the same pipeline script on an ongoing basis.

Pipelines also support integration with other Jenkins plugins, and they persist across restarts of your Jenkins master.

Using Pipelines in Jenkins 2.0 involves just a few basic steps:

  1. Install the Pipeline plugin into your environment, if it is not already there. (Jenkins versions 2.0 and later should include the Pipeline plugin by default.)
  2. Write your pipeline script either by entering the code directly into the Jenkins Web interface or inserting it into a Jenkinsfile that you check into your source code repository.
  3. Click “Build Now” in Jenkins to create your pipeline.

The syntax of pipeline scripts is fairly simple, and the format should be familiar to anyone with basic scripting or Java programming experience. For details, check out the Jenkins pipeline documentation.
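To make that concrete, here is a minimal scripted-pipeline sketch (my own illustration, not from the article), assuming a Maven project and a Jenkinsfile checked into the repository; the stage names and shell commands are placeholders:

node {
    stage('Checkout') {
        // Check out the repository this Jenkinsfile came from
        checkout scm
    }
    stage('Build') {
        // Any shell step works here; a Maven build is just an example
        sh 'mvn -B clean package'
    }
    stage('Test') {
        sh 'mvn test'
    }
}

Each stage then shows up as its own column in Stage View, which makes it easy to see where a run slowed down or failed.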

You can monitor the progress of builds that you have configured through pipelines using Stage View, a new part of the Jenkins interface.

Things That Could Be Better

Alas, Jenkins 2.0 pipelines are not perfect. (What is?) One drawback is that the script syntax has to vary a bit depending on whether you run Jenkins on Windows or Linux. On Linux you make calls to sh and use Unix-style file separators, whereas on Windows you use bat and backslashes (which you have to escape, meaning \ becomes \\ in a file path) to identify file locations. This is not ideal.
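For example (an illustrative snippet, not taken from the article), the same directory-listing step would be written differently per platform:

// On a Linux agent
sh 'ls /var/lib/jenkins/workspace'

// On a Windows agent (backslashes must be escaped in the script)
bat 'dir C:\\Jenkins\\workspace'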

But it’s also not a big deal, and it probably won’t matter much to most people. Overall, pipelines are a great way to turn Jenkins 2.0 into a platform not just for continuous integration, but for continuous delivery as well.


About Hemant Jain

Hemant Jain is the founder and owner of Rapidera Technologies, a full service software development shop. He and his team focus on modern software delivery techniques and tools. Prior to Rapidera he managed large-scale enterprise development projects at Autodesk and Deloitte.

Setting up a LAMP stack on Red Hat Enterprise Linux

You obviously know what a LAMP stack is if you’ve managed to find your way here, but for those who may be unsure, the key is in the name: (L)inux (A)pache (M)ariaDB (P)HP—a term that has become synonymous around the globe with building a basic web server with database and PHP functionality. A myriad of web applications, ranging from WordPress to Joomla to Magento, use this setup, and if you know how to get it up and running, you’re off to a great start. It couldn’t be easier with RHEL, so let’s get started. (MariaDB can also be exchanged for MySQL or another database of your choice.)

Our Objectives

  • Set up a Red Hat Enterprise Linux (RHEL) 7.2 virtual machine
  • Install required applications (Apache, MariaDB, PHP)
  • Configure an initial virtual host in Apache
  • Configure MySQL and create a database for testing
  • Demonstrate PHP working with a test page, which also pulls data from our test database

 

Installing RHEL on a VM

To get started, I’m firing up a virtual machine with the following specifications:

  • 1GB RAM
  • 16GB virtual hard drive space
  • 1 vCPU

Now it’s time to power up our VM and let it boot from the RHEL ISO. Once you’ve booted into the setup GUI, you’ll be asked some basic questions. In my case, I simply selected my time zone and specified my network settings. I would suggest leaving everything else at default for simplicity.


Once RHEL has successfully installed, you can reboot into your new installation. As we left the default "minimal install" selected, we’ll need to manually register the system with the Red Hat network and attach it to a subscription so it can receive updates and packages. Simply log in and run subscription-manager register --auto-attach and you will be prompted for your username and password.

Installing required applications

Great! Before getting started, I would first recommend you run yum -y update to grab any recent security updates and reboot.

Now we’re ready to install Apache, MariaDB and PHP. Simply run:

yum -y install httpd php php-mysql mariadb mariadb-server

Then wait for yum (Yellowdog Updater, Modified) to do its thing. After yum has finished, we want to make sure that our newly installed applications are set to start at boot. To do this, we run:

systemctl enable httpd && systemctl enable mariadb 
systemctl start mariadb && systemctl start httpd

This will get them all up and running for the first time.
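As a quick sanity check (my own addition, not part of the original walkthrough), you can confirm that both services are enabled and running before moving on:

systemctl is-enabled httpd mariadb
systemctl status httpd mariadb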

Configure Virtual Host in Apache

This step isn’t strictly necessary if you only want to run one application on your server, but I always try to get a virtual host configured as it keeps things tidy and allows for easy expansion if you feel like hosting other websites on your LAMP server in the future.

So let’s go ahead and create a new virtual host—but first, let’s create a directory for this virtual host to serve files from. And whilst we’re at it, we might as well add a ‘phpinfo’ file there to validate our PHP configuration.

mkdir /var/www/test-site && echo -e "<?php \nphpinfo();" > /var/www/test-site/index.php

Creating the virtual host is easy. Let’s create a new file named /etc/httpd/conf.d/test-site.conf and add the following to it:

<VirtualHost *:80>
 DocumentRoot "/var/www/test-site"
 ServerName test-site.example.com
</VirtualHost>

If you’re following this guide exactly, then you’ll need to add a hosts entry on your local computer so it knows where ‘test-site.example.com’ lives. Simply take the IP address of the server you’re configuring your LAMP stack on and add it to your hosts file (where x.x.x.x is the server IP):

x.x.x.x test-site test-site.example.com

Now you’re ready to browse to your new LAMP server—but wait, your page load times out and can’t connect. You need to allow the web traffic through the firewall with the following command:

firewall-cmd --zone=public --add-service=http

If everything goes to plan, you should now see a phpinfo screen confirming that Apache and PHP are set up and working together.
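One caveat worth adding (not in the original steps): the command above only changes the runtime firewall configuration, so the rule will disappear on reboot. To make it permanent as well, run:

firewall-cmd --permanent --zone=public --add-service=http
firewall-cmd --reload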

Configure MySQL and create a database for testing

Although MariaDB is now up and running, it’s worth running /usr/bin/mysql_secure_installation to secure MariaDB further. You should do the following when prompted:

  • Set the root password for access to MariaDB
  • Remove anonymous users
  • Disallow root login remotely (unless you want to be able to connect to it remotely, of course)
  • Remove test database and access to it
  • Reload privilege tables

Great! Now we want to go ahead and make sure that a PHP application running on our LAMP server can access a database in MariaDB and see tables. Firstly, we’ll need to create a database for testing and create some tables. To do this, we need to first connect to MariaDB with our root username and password. I have included a screenshot of this below to show you what sort of output to expect. Upon logging in with mysql -uroot -p you’ll need to run the following commands:

  • create database test;
  • use test;
  • create table example ( id INT, data VARCHAR(100) );
  • create table example1 (id INT, data VARCHAR(100) );
  • create table example2 (id INT, data VARCHAR(100) );
  • create user 'test-site'@'localhost' identified by 'password';
  • grant all on test.* to 'test-site'@'localhost';
  • flush privileges;

[Screenshot: MariaDB client session showing the output of the commands above]

In the above example, we’re creating a database, creating three example tables within this database, and creating a user with limited access to this database. We don’t want to use our root user MariaDB credentials for any interaction between our web application and the database, as this is insecure.
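If you want to double-check the new account before touching PHP (an optional step I am adding here), you can log in as the restricted user and list the tables it can see:

mysql -utest-site -ppassword test -e "SHOW TABLES;"

This should list the three example tables created above.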

Now that we’ve got our database and tables set up, let’s delete our old index.php file and recreate it with the following PHP code. If you’ve been following this guide exactly, then you’ll be good to go with the existing 'dbname', 'dbuser', 'dbpass' and 'dbhost' variables as set below. But if not, you’ll simply need to change these to match your chosen credentials and database name.

<?php
// Connection details for the test database created earlier
$dbname = 'test';
$dbuser = 'test-site';
$dbpass = 'password';
$dbhost = 'localhost';

// Connect and select the database (the legacy mysql_* extension is provided by php-mysql on RHEL 7)
$connect = mysql_connect($dbhost, $dbuser, $dbpass) or die("Unable to Connect to '$dbhost'");
mysql_select_db($dbname) or die("Could not open the db '$dbname'");

// Count the tables visible to this user
$test_query = "SHOW TABLES FROM $dbname";
$result = mysql_query($test_query);
$tblCnt = 0;
while($tbl = mysql_fetch_array($result)) {
  $tblCnt++;
}
if (!$tblCnt) {
  echo "There are no tables<br />\n";
} else {
  echo "There are $tblCnt tables<br />\n";
}
?>

If everything has gone to plan, then the next time you browse to your server you should see the following:

[Screenshot: the test page in a browser reporting the number of tables in the test database]

Final Thoughts

So there you have it. Setting up RHEL to serve your PHP application with a database backend couldn’t be easier! Adding additional sites to your Apache configuration is easy and can be done by simply adding additional VirtualHost config files in the manner shown above. You can go more in-depth by adding additional configuration parameters to each virtual host. For instance, you may wish for ‘test-site.example.com’ to show a directory index but wish to prevent ‘test-site2.example.com’ from exhibiting this same behaviour.
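As a hypothetical illustration of that last point (test-site2.example.com and its document root are invented names, not part of the walkthrough), a second file such as /etc/httpd/conf.d/test-site2.conf could disable directory listings for just that site:

<VirtualHost *:80>
 DocumentRoot "/var/www/test-site2"
 ServerName test-site2.example.com
 <Directory "/var/www/test-site2">
  Options -Indexes
 </Directory>
</VirtualHost>

Remember to reload Apache with systemctl reload httpd after adding or changing virtual hosts.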


About Keith Rogers

Keith Rogers is an IT professional with over 10 years’ experience in modern development practices who has built full development stacks. Currently he works for a broadcasting organization in the DevOps space with a focus on automation. In his spare time he tinkers with modern development tools and is a technical contributor at Fixate IO.

 

Lightweight Application Instrumentation with PCP

Wait… what?

I was involved in diagnosing a production system performance problem: a web application serving thousands of interactive users was acting up.  Symptoms included significant time running kernel code on behalf of the application (unexpectedly), and at those times substantial delays were observed by end users.

As someone with a systems programming background, I figured I had a decent shot at figuring this one out. Naively I reached for strace(1), the system call and signal tracer, to provide insights (this was long before perf(1) came along, in my defence).

When I fired up strace, however, things rapidly went from bad to oh-so-much-worse, with the application becoming single threaded and almost entirely locking up under ptrace(2) control. Nothing was able to restore responsiveness once that flat spin had been induced. Sadly an unscheduled downtime resulted, and I wandered off to lick my wounds, wondering what on earth had just happened.

Why?

Without going into the details of what actually happened, nor the weird and wonderful things that are going on under the hood inside strace – suffice to say this was a pathological scenario and strace was certainly the wrong tool for the job. Hindsight is 20/20!

However, lesson learned – and it’s not only strace of course – there are many analysis tools which take the behavior modifying approach of “switch on special/new code paths, export lots of special/new diagnostics” that can make production system failure situations far, far worse.

The kernel and many system services provide a wealth of always-enabled instrumentation, and in my experience it provides a good return on investment when business-critical applications do the same. Knowing that counters, gauges and other measures are always there, always updated, and – ideally – always being sampled and recorded, builds high levels of confidence in their safety at acceptable (known, fixed, low) costs.

How?

There are many different projects and APIs for instrumenting applications, with a variety of design goals, trade-offs and overheads. Many articles have been devoted to the sorts of things worth instrumenting within an application, so let’s skip over that (extremely important!) topic here and instead focus on the underlying mechanisms.

One thing to note first up is that all the approaches require some form of inter-process communication mechanism, to get the metric values out of the application address space and into the monitoring tools – this can involve varying degrees of memory copying, context switching, synchronization and various other forms of impact on the running application.

In the Performance Co-Pilot (pcp.io) toolkit, the MMV – “Memory Mapped Value” – approach tackles this issue by providing low-cost, lightweight metric value extraction from running applications.

The approach is built around shared memory, where the application registers metrics and is assigned fixed memory locations for the safe manipulation of each metric value. The application is then left to update each in-memory value according to its needs and the semantics of each metric.

The memory locations are allocated, and fixed, in such a way that they can also be safely accessed by separate (collector, monitoring and/or analysis) processes. Naturally, the overheads of actually counting events, accumulating byte counts, gauging utilization and so on cannot be removed, but the goal is to make that the only cost incurred.

[Diagram: application instrumentation via shared memory mappings]

In the MMV model, at the points where metrics are updated, the only cost involved is the memory mapping update, which is a single memory store operation. There is no need to explicitly transfer control to any other thread or process, nor to allocate memory, nor to make system or library calls. The external PCP sampling process(es) will only sample values at times driven by those tools, placing no overhead on the instrumented application.
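To give a feel for what this looks like in application code, here is a rough C sketch based on the libpcp_mmv API (mmv_stats_init, mmv_lookup_value_desc and mmv_inc_value); the metric definition fields and names below are my assumptions, so treat the samples shipped in the pcp-devel package as the authoritative reference:

#include <pcp/pmapi.h>
#include <pcp/mmv_stats.h>

/* One counter metric, registered once at startup. */
static mmv_metric_t metrics[] = {
    {   .name = "requests.total",
        .item = 1,
        .type = MMV_TYPE_U64,
        .semantics = MMV_SEM_COUNTER,
        .dimension = { .dimCount = 1, .scaleCount = PM_COUNT_ONE },
        .shorttext = "Requests handled",
        .helptext = "Total requests handled since the application started",
    },
};

int main(void)
{
    /* Creates the memory-mapped file that pmdammv discovers and exports. */
    void *map = mmv_stats_init("myapp", 0, MMV_FLAG_PROCESS, metrics, 1, NULL, 0);
    pmAtomValue *requests = mmv_lookup_value_desc(map, "requests.total", NULL);

    /* ... application work loop ... */
    mmv_inc_value(map, requests, 1);   /* a single in-memory store per event */

    return 0;
}

Once the mapping exists, the metric should appear under the mmv namespace (for example mmv.myapp.requests.total), and tools such as pmval or pmchart can sample it without placing any load on the application.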

The other good news is the MMV approach scales well as metrics are added; applications with many hundreds of metrics are able to update values with the same overheads as lightly instrumented applications.

On the other hand, to attain this level of performance there are trade-offs being made. It’s assumed that always-enabled sampling is the analysis model (so this technique is not suited to event tracing, which is more the domain of complementary approaches like dtrace, ETW, LTTng and systemtap), nor is it suited to compound data structures. But for the kinds of performance values we’re looking at here, where each metric is usually an independent numeric value, this proves to be a worthwhile trade-off in practice for always-enabled instrumentation.

Where?  When?

All Red Hat Enterprise Linux releases from 6.6 onward include MMV as an instrumentation approach you can use. Sample instrumented application code is available in the pcp-devel package.

The service involved with reading the memory mappings is pmcd(1) and its pmdammv(1) shared library helper.  Many PCP tools exist that will record, visualize, infer and report on your new application metrics.

High-level language projects that generate MMV mappings natively (Speed for Golang, and Parfait for Java) are also available from GitHub and Maven Central.

Provisioning Vagrant boxes using Ansible

Ansible is a great tool for system administrators who want to automate their day-to-day work. From configuration management to provisioning and managing containers for application deployments, Ansible makes it easy. In this article, we will see how we can use Ansible to provision Vagrant boxes.

So, what exactly is a Vagrant box? In simple terms, we can think of a Vagrant box as a virtual machine prepackaged with the development tools we require to run our development environment. We can use these boxes to distribute the development environment for other team members to use when working on their projects. Using Ansible, we can automate the task of provisioning Vagrant boxes with our development packages. So, let’s see how we can do this.

For this tutorial, I am using Fedora 24 as my host system and Ubuntu 14.04 (Trusty) as my Vagrant box.
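As a rough sketch of where this is headed (the box name and playbook.yml below are placeholders, not taken from the article), the Vagrantfile simply points Vagrant’s built-in Ansible provisioner at a playbook:

Vagrant.configure("2") do |config|
  # Ubuntu 14.04 (Trusty) base box from the public catalog
  config.vm.box = "ubuntu/trusty64"

  # Run Ansible from the host on `vagrant up` / `vagrant provision`
  config.vm.provision "ansible" do |ansible|
    ansible.playbook = "playbook.yml"
  end
end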

Editor’s note: If you want to get started with Vagrant to provision or build containers using Red Hat Enterprise Linux with just a few clicks, check out the Red Hat Container Development Kit.

Continue reading “Provisioning Vagrant boxes using Ansible”