PostgreSQL and MongoDB Software Collections: Three easy steps to get started

In the first part of my series on Software Collections (SCL), I gave general information and listed the three steps needed to get started with SCL for a number of languages. This post covers the steps for PostgreSQL and MongoDB.

Enable the SCL repository

The first step is to enable the SCL software repository if you haven’t already done so. As the root user run:

# subscription-manager repos --enable rhel-server-rhscl-7-rpms

Now onto installing the database software.


PostgreSQL

PostgreSQL is a powerful open source, object-relational, ACID-compliant database system. PostgreSQL runs on all major operating systems, and its key features are reliability, data integrity, and correctness. PostgreSQL 9.5 was recently released as part of Red Hat Software Collections (RHSCL) 2.2. A number of earlier releases (9.2 and 9.4) are also available from RHSCL.

To install the PostgreSQL 9.5 collection, run the following command as the root user:

# yum install rh-postgresql95

Now set up PostgreSQL and create the initial database. First use scl enable to add PostgreSQL to the root user’s environment, then run the setup script.

# scl enable rh-postgresql95 bash
# postgresql-setup --initdb

Now start the PostgreSQL server and enable it to start up when your system boots:

# systemctl start rh-postgresql95-postgresql
# systemctl enable rh-postgresql95-postgresql

To run psql as the postgres user, you need to use su as well as scl enable in order to set up that user’s environment.

# su - postgres -c 'scl enable rh-postgresql95 -- psql'
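psql and other libpq-based clients also honor the standard libpq environment variables, which is handy for pointing the tools above at a non-default server. A minimal sketch (the host, port, and database values are illustrative):

```shell
# psql and other libpq clients read these standard environment variables;
# the values below are illustrative.
export PGHOST=localhost
export PGPORT=5432
export PGDATABASE=postgres
# The equivalent libpq connection URI:
echo "postgresql://${PGHOST}:${PGPORT}/${PGDATABASE}"
```

When exported in the shell started by scl enable, any subsequent psql invocation from that shell picks these values up automatically.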

PostgreSQL software collections as a docker formatted container image

Last but not least, you can try PostgreSQL 9.5 in a docker container. On Red Hat Enterprise Linux 7, you can get the image with the following commands:

$ docker pull

More information

To see what packages were installed as part of the rh-postgresql95 collection, and what others are available, run:

# yum list rh-postgresql95\*

Note: The rh-postgresql95 collection includes the PostgreSQL server components and related client tools that match the specific server version. When building and installing client applications, it is recommended to use the postgresql-libs package available as part of the base Red Hat Enterprise Linux system:

# yum install postgresql-libs



MongoDB

MongoDB is a cross-platform, open source document database designed for ease of development and scaling. MongoDB 3.2 was recently released as part of Red Hat Software Collections (RHSCL) 2.2. A number of earlier releases (2.4, 2.6, and 3.0) are also available from RHSCL.

To install the MongoDB 3.2 collection, run the following command as the root user:

# yum install rh-mongodb32 rh-mongodb32-mongodb

Now start the mongod server and enable it to start up when your system boots. First you will need to use scl enable to add MongoDB to the root user’s environment:

# scl enable rh-mongodb32 bash
# systemctl start rh-mongodb32-mongod
# systemctl enable rh-mongodb32-mongod

To start using MongoDB, use scl enable to add it to your environment and run a bash shell:

$ scl enable rh-mongodb32 bash

You can now run the mongo client:

$ mongo
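Client applications usually connect with a MongoDB connection URI rather than the interactive mongo shell. A sketch of how such a URI is composed (the host, port, and database name are illustrative):

```shell
# Compose a MongoDB connection URI; all values below are illustrative.
MONGO_HOST=localhost
MONGO_PORT=27017
MONGO_DB=test
MONGO_URI="mongodb://${MONGO_HOST}:${MONGO_PORT}/${MONGO_DB}"
echo "$MONGO_URI"
```

MongoDB drivers accept a connection string in this form directly.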

MongoDB 3.2 software collections as a docker formatted container image

Last but not least, you can try MongoDB 3.2 in a docker container. On Red Hat Enterprise Linux 7, you can get the image with the following commands:

$ docker pull

More information

The rh-mongodb32 collection delivers version 3.2 of the MongoDB server, related client tools, and the mongo-java-driver for connecting to the MongoDB server from Java. To see what packages were installed as part of the rh-mongodb32 collection, and what others are available, run:

# yum list rh-mongodb32\*


Links to other parts:

Introduction part 1

JBoss EAP 7 Domain deployments – Part 1: Set up a simple EAP Domain

Red Hat JBoss EAP 6 introduced several new concepts, such as configuration simplification, modularity, a new management CLI, a user-friendly management console, and a powerful feature called “Domains”. Domain mode changes the way applications are deployed on EAP instances.

JBoss EAP 7.0 was just released and announced by Red Hat.

In this blog series we will present several ways to deploy an application on an EAP Domain. The series consists of 5 parts. Each one will be a standalone article, but the series as a whole will present a range of useful topics for working with JBoss EAP.

  • Part 1: Set up a simple EAP 7.0 Domain (this article).
  • Part 2: Domain deployments through the new EAP 7.0 Management Console.
  • Part 3: Introduction to DMR (Dynamic Model Representation) and domain deployments from the command line interface (CLI).
  • Part 4: Domain deployments from the REST Management API.
  • Part 5: Manage EAP 6 hosts from an EAP 7.0 domain.

Part 1: Set up a simple EAP 7.0 Domain.

The JBoss EAP “Domain” mode differs from traditional Standalone mode and allows you to deploy and manage EAP instances in a multi server topology. In this first article we are going to set up a JBoss EAP 7.0 domain with the following requirements:

  • 1 Domain Controller on a machine called host0
  • 1 Host Controller on a machine host1 with two EAP instances, Server11 and Server12
  • 1 Host Controller on a machine host2 with three EAP instances, Server21, Server22, and Server23
  • Host0 runs as the master controller.
  • Host1 and Host2 are slaves connecting to Host0.
  • Server11 and Server21 are members of the primary server group (name=primary-server-group).
  • Server12 and Server22 belong to the secondary server group (name=secondary-server-group).
  • Server23 is the only member of the singleton server group (name=singleton-server-group).
  • In real life, machines Host1 and Host2 would usually be in different physical locations, but for the purposes of this tutorial we will simulate them on the same localhost, using a single EAP installation and a different configuration folder for each machine.
  • To keep things simple, we will not cover JVM configuration in depth in this part.
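The topology above maps onto the domain configuration files roughly as follows. This is a trimmed sketch, not a complete configuration: the server group and server names come from the requirements above, while the profile and socket-binding names shown are the EAP defaults and other attributes are omitted.

```xml
<!-- domain.xml (Domain Controller, host0): the three server groups -->
<server-groups>
    <server-group name="primary-server-group" profile="full">
        <socket-binding-group ref="full-sockets"/>
    </server-group>
    <server-group name="secondary-server-group" profile="full">
        <socket-binding-group ref="full-sockets"/>
    </server-group>
    <server-group name="singleton-server-group" profile="full">
        <socket-binding-group ref="full-sockets"/>
    </server-group>
</server-groups>

<!-- host.xml on host1: its two servers joined to their groups -->
<servers>
    <server name="Server11" group="primary-server-group" auto-start="true"/>
    <server name="Server12" group="secondary-server-group" auto-start="true">
        <socket-bindings port-offset="100"/>
    </server>
</servers>
```

host2’s host.xml would declare Server21, Server22, and Server23 the same way, assigning Server23 to singleton-server-group.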

Continue reading “JBoss EAP 7 Domain deployments – Part 1: Set up a simple EAP Domain”

Have your own Microservices playground

Microservices stand at the “Peak of Inflated Expectations“. Countless developers and companies want to bring in this new development paradigm without knowing what challenges they will face. Of course, the challenges and the reality of an enterprise company that has been producing software for the last 10 or 20 years are totally different from those of a start-up that released its first software only months ago.


Before adopting microservices as an architectural pattern, there are several questions that need to be addressed:

  • Which languages and technologies should I adopt?
  • Where and how do I deploy my microservices?
  • How do I perform service-discovery in this environment?
  • How do I manage my data?
  • How do I design my application to handle failure? (Yes! It will fail!) 
  • How do I address authentication, monitoring and tracing?

Continue reading “Have your own Microservices playground”

Announcing Red Hat JBoss Data Grid 7

We are very excited to announce General Availability (GA) of Red Hat JBoss Data Grid (JDG) 7!

JDG supercharges today’s modern applications and allows developers to meet tough requirements of high performance, availability, reliability, and elastic scale. JBoss Data Grid is compatible with the existing data tier as well as applications written in any language, using any framework and any platform via multiple APIs such as memcached, HotRod, and REST. Red Hat JBoss Data Grid empowers developers to obtain a streamlined approach to standing up new applications, avoiding the challenges normally associated with integrating applications and traditional databases.

JDG 7 introduces the following major new features:

Real-time Data Analytics

  • Distributed Streams
    JDG 7 introduces a distributed version of the Java 8 Stream API which enables you to perform rich analytics operations on data stored in JDG using the functional expressions available in the Stream API.
  • Apache Spark integration
    JDG 7 introduces a Resilient Distributed Dataset (RDD) and Discretized Stream (DStream) integration with Apache Spark version 1.6. This enables you to use JDG as a highly scalable, high-performance data source for Apache Spark, executing Spark and Spark Streaming operations on data stored in JDG.
  • Apache Hadoop Integration
    JDG 7 features a Hadoop InputFormat/OutputFormat integration, which enables use of JDG as a highly scalable, high performance data source for Hadoop. This enables use of tools from the Hadoop ecosystem which support InputFormat/OutputFormat for processing on data stored in JDG.
  • Remote Task Execution
    JDG 7 features the ability to execute tasks (business logic) on JDG Server from the Java Hot Rod client. The task can be expressed as a Java executable loaded on JDG Server or as stored JavaScript procedure which executes on the Java 8 (Nashorn) scripting engine on JDG Server.

Ease of use and administration

  • Administration Console for Server Deployments
    JDG 7 introduces a new Administration Console which enables you to view a JDG cluster and perform clustered operations across its nodes. Operations include creation of new caches and cache templates, starting or stopping the cluster, adding or removing nodes, and deploying or executing remote tasks.
  • Controlled shutdown and restart of cluster
    JDG 7 adds the ability to shutdown or restart a cluster in a controlled manner, with data restore from persistent storage.

Expanded polyglot support

  • Node.js (JavaScript) Hot Rod client
    JDG 7 introduces a new, fully supported Node.js (JavaScript) Hot Rod client, which enables you to use JDG as a high performance distributed in-memory NoSQL store from Node.js applications.
  • C++ Hot Rod client enhancements
    JDG 7 introduces asynchronous operations, querying, remote task invocation, and encryption of client/server communication using TLS/SSL, as Tech Preview features in the Hot Rod C++ client.
  • C# Hot Rod client enhancements
    JDG 7 introduces querying, remote task invocation, and encryption of client/server communication using TLS/SSL, as Tech Preview features in the Hot Rod C# client.


Cassandra cache store

JDG 7 introduces a new out-of-the-box Cassandra cache store, which enables you to persist the entries of a distributed cache to a shared Apache Cassandra instance.

Additional Resources

There are many resources available on the Customer Portal to get more detailed information about JBoss Data Grid 7.

Connecting to a Remote database from a JWS/Tomcat application on OpenShift

One of the common requirements for Java based applications on OpenShift is to have these workloads connect back out to an enterprise database that resides outside of the OpenShift infrastructure. While OpenShift natively supports a variety of relational databases (including Postgres and MySQL) as Docker based deployments within the platform, connecting to an existing enterprise database infrastructure is preferred in many large organizations for a variety of reasons including:

  • Inherent confidence in traditional databases due to in house experience around developing and managing these databases
  • Ability to leverage existing backup/recovery procedures around these databases
  • Technical limitations with these databases in being able to be deployed in a containerized model

One of the strengths of the OpenShift platform is its ability to accommodate these “traditional” workloads, so that middleware operations can take advantage of the benefits and efficiencies gained from Dockerized applications, while giving development teams a platform to start designing and architecting applications that fit a more microservice-based pattern, leveraging a datastore such as MongoDB or MySQL that OpenShift supports.

Another common workflow in many organizations is to externalize the database connection information so that the application can be migrated from environment to environment (for example, Dev to QA to Prod) with the appropriate connection information for each environment. These teams also typically work with the application binary (.war, .ear, .jar) as the artifact that’s promoted between environments, as opposed to Docker based images.
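As a sketch of that externalization idea, the connection details can live in environment variables with per-environment values, so the same binary moves unchanged. All of the names and defaults below are hypothetical:

```shell
# Hypothetical externalized settings; each environment (Dev/QA/Prod) supplies
# its own values, so the application binary itself never changes.
DB_HOST="${DB_HOST:-dev-db.example.com}"
DB_PORT="${DB_PORT:-3306}"
DB_NAME="${DB_NAME:-appdb}"
JDBC_URL="jdbc:mysql://${DB_HOST}:${DB_PORT}/${DB_NAME}"
echo "$JDBC_URL"
```

In OpenShift these variables would typically be set on the DeploymentConfig, with the application reading them at startup.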

In this article, I will walk through an example implementation for achieving this. A sensitive aspect of this migration process is the credentials for the database, since storing credentials in clear text is frowned upon; I will cover a variety of strategies for dealing with this in a follow-on article. For this example, I will be using the following project, which contains the source code covered in this article.

Let’s get started!

Continue reading “Connecting to a Remote database from a JWS/Tomcat application on OpenShift”

Continuous Delivery to JBoss EAP and OpenShift with the CloudBees Jenkins Platform

If you are using JBoss Enterprise Application Platform (EAP) for Java EE development, the CloudBees Jenkins Platform provides an enterprise-class toolchain for automated CI/CD from development to production.

The CloudBees Jenkins Platform now supports integrations with both Red Hat JBoss Enterprise Application Platform (EAP) and Red Hat OpenShift across the software delivery pipeline. This enables developers to build, test, and deploy applications with Jenkins-based continuous delivery pipelines, whether on JBoss EAP 7 directly or on JBoss EAP 7 on OpenShift.

The following examples are based on the Jenkins Pipeline plugins, which let you model your software delivery process as pipelines, however complex. If you are not familiar with the CloudBees Jenkins Pipeline plugin, you may find these two blog posts helpful for ramping up: Using the Pipeline Plugin to Accelerate Continuous Delivery — Part 1 and Part 2.

Let’s get started. In a typical CI/CD pipeline, your process would be similar to this one:

  • Developers commit code to the SCM, which notifies Jenkins via web-hooks.
  • Jenkins compiles the code and executes a series of tests on it: static code analysis, code metrics, unit testing, etc.
  • If everything goes well, Jenkins deploys the code to a development environment. This step may require a manual approval, depending on how that environment is used. A typical use case is deploying the application just so further validations can be run against it with tools like Selenium.
  • The steps that follow promote the application between the various environments and validate that each deployment was correct.
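The flow above can be sketched as a Jenkins Pipeline script. This is a minimal scripted-pipeline sketch, not a working configuration: the stage names are arbitrary, deploy.sh is a hypothetical helper, and the actual deployment steps depend on which JBoss EAP or OpenShift plugin integration you use.

```groovy
// Illustrative scripted pipeline; deploy steps are placeholders.
node {
    stage('Checkout') {
        checkout scm                       // triggered by an SCM web-hook
    }
    stage('Build & Test') {
        sh 'mvn -B clean verify'           // compile, unit tests, code analysis
    }
    stage('Deploy to Dev') {
        // e.g. copy the .war to a JBoss EAP 7 instance, or trigger an
        // OpenShift build/deployment -- a plugin-specific step goes here
        sh './deploy.sh dev'               // hypothetical helper script
    }
    stage('Promote to QA') {
        input 'Promote this build to QA?'  // manual approval gate
        sh './deploy.sh qa'                // hypothetical helper script
    }
}
```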

Let’s see how the build, deployment, and promotion between the various environments can be done for both types of JBoss installs, to JBoss EAP 7 and to JBoss EAP 7 on OpenShift, and what the differences between them are.

Continue reading “Continuous Delivery to JBoss EAP and OpenShift with the CloudBees Jenkins Platform”