
Observations on Porting from .NET Framework to .NET Core

You’ve heard that .NET has gone open source. You’ve also heard that it has gone cross-platform. And you’ve even heard that Red Hat is shipping a supported version of .NET on Red Hat Enterprise Linux. So maybe you are thinking to yourself, “wow, this is fantastic! I’m going to copy the EXEs and DLLs of my .NET application over to my Red Hat machine and run them!”

Well, unfortunately, it’s not going to be quite that easy. At least not today.

First and foremost, the open source version of .NET is called “.NET Core.” It is available for many platforms, including Windows and Linux. The .NET projects and applications you already have running, however, were built on and for .NET Framework. And .NET Framework and .NET Core are not the same thing; they are more like siblings, and neither is a subset or child of the other.

“Well, then, what’s the point?!”

The good news is that while they are siblings, they do look a lot alike. Although they’re not identical twins, you’ll definitely recognize them as being from the same immediate family. As such, it is possible to port many existing .NET Framework applications to .NET Core.

“How hard is it to port something?”

Continue reading “Observations on Porting from .NET Framework to .NET Core”

Red Hat JBoss Data Virtualization on OpenShift: Part 2 – Service enable your data

Welcome to part 2 of the series on Red Hat JBoss Data Virtualization (JDV) running on OpenShift.

JDV is a lean, virtual data integration solution that unlocks trapped data and delivers it as easily consumable, unified, and actionable information. JDV makes data spread across physically diverse systems such as multiple databases, XML files, and Hadoop systems appear as a set of tables in a local database.

When deployed on OpenShift, JDV enables:

  1. Service enabling your data
  2. Bringing data from outside to inside the PaaS
  3. Breaking up monolithic data sources virtually for a microservices architecture

Together with the JDV for OpenShift image, we have made available OpenShift templates that allow you to test and bootstrap JDV.

Introduction

In part 1 we described how to get started with JDV running on OpenShift. During the build phase of the pod, several artifacts were downloaded from the GitHub URL provided in the JDV OpenShift template. We deployed two virtual databases (VDBs): country-ws (an external web service-based datasource) and marketdata-file (a file-based datasource).
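
Once a VDB like these is deployed, it can be queried over JDBC just as if it were a local relational database. The following is a minimal sketch using the Teiid JDBC driver; the host name, credentials, VDB name, and view name are illustrative assumptions, 31000 is Teiid’s default JDBC port, and the Teiid client JAR is assumed to be on the classpath.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class QueryVdb {
        public static void main(String[] args) throws Exception {
            // Host, credentials, VDB name, and view name are placeholders;
            // 31000 is Teiid's default JDBC port.
            String url = "jdbc:teiid:marketdata-file@mm://jdv-host.example.com:31000";

            try (Connection conn = DriverManager.getConnection(url, "teiidUser", "password");
                 Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery("SELECT * FROM MarketData")) {
                while (rs.next()) {
                    System.out.println(rs.getString(1));
                }
            }
        }
    }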

Continue reading “Red Hat JBoss Data Virtualization on OpenShift: Part 2 – Service enable your data”


Unlock your MariaDB/MySQL data with Red Hat JBoss Data Virtualization

Welcome back to a new episode of the series: “Unlock your [….] data with Red Hat JBoss Data Virtualization.” Through this blog series, we will look at how to connect Red Hat JBoss Data Virtualization (JDV) to different and heterogeneous data sources.

JDV is a lean, virtual data integration solution that unlocks trapped data and delivers it as easily consumable, unified, and actionable information. It makes data spread across physically diverse systems, such as multiple databases, XML files, and Hadoop systems, appear as a set of tables in a local database. By providing the following functionality, JDV enables agile data use:

  1. Connect: Access data from multiple, heterogeneous data sources.
  2. Compose: Easily combine and transform data into reusable, business-friendly virtual data models and views.
  3. Consume: Make unified data easily consumable through open standards interfaces.

It hides complexities such as the true locations of the data and the mechanisms required to access or merge it, so the data becomes easier for developers and users to work with.

This post will guide you step by step through connecting JDV to a MariaDB/MySQL database using Teiid Designer. We will connect to a MariaDB 10.1 server using MySQL Connector/J 5.1, a JDBC driver for communicating with MariaDB/MySQL servers. You can follow the same tutorial with a MySQL instance instead.
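
Before configuring the connection in Teiid Designer, it can be useful to confirm that Connector/J can actually reach the database. Here is a minimal smoke test; the host, database name, and credentials are placeholders, and the mysql-connector-java 5.1 JAR is assumed to be on the classpath.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class MariaDbSmokeTest {
        public static void main(String[] args) throws Exception {
            // Placeholder host, database, user, and password; Connector/J 5.1
            // registers itself automatically via JDBC 4 driver auto-loading.
            String url = "jdbc:mysql://localhost:3306/mydb";

            try (Connection conn = DriverManager.getConnection(url, "myuser", "mypassword");
                 Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery("SELECT VERSION()")) {
                if (rs.next()) {
                    System.out.println("Connected to: " + rs.getString(1));
                }
            }
        }
    }

If this prints a MariaDB version string, the same driver JAR can then be reused when defining the connection profile in Teiid Designer.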

Continue reading “Unlock your MariaDB/MySQL data with Red Hat JBoss Data Virtualization”


Looking for DevNation 2017 CFP

You may have seen (or maybe missed) that DevNation will be folded into Red Hat Summit 2017.

The CFP deadline has been pushed back to December 16, so I look forward to seeing your submissions for application development!

Speakers: submit your Application Development proposals today!
  1. Submit your proposal on the Summit CFP site [1] and tag it with the primary theme of Application Development.
  2. We’re interested in advanced technical sessions across all developer-related topics, but we are especially looking for sessions on: Microservices, MicroProfile, Containers, .NET, modern coding practices, CI/CD, DevOps, cloud/OpenShift, Mobile, Eclipse / Che, IoT, Node.js / JavaScript, Software Collections, C++, performance tools, etc.
  3. Got a developer topic that’s not listed in item 2? Submit it.

[1] Submit your proposals at redhat.com/summit and check out the guide at http://redhat.slides.com/events/2017-red-hat-summit-submission-guide#/



Red Hat JBoss Data Virtualization on OpenShift: Part 1 – Getting started

Red Hat JBoss Data Virtualization (JDV) is a lean, virtual data integration solution that unlocks trapped data and delivers it as easily consumable, unified, and actionable information. JDV makes data spread across physically diverse systems such as multiple databases, XML files, and Hadoop systems appear as a set of tables in a local database.

When deployed on OpenShift, JDV enables:

  1. Service enabling your data
  2. Bringing data from outside to inside the PaaS
  3. Breaking up monolithic data sources virtually for a microservices architecture

Together with the JDV for OpenShift image, we have made available OpenShift templates that allow you to test and bootstrap JDV.

This article will demonstrate how to get started with JDV running on OpenShift. JDV is available as a containerized xPaaS image that is designed for use with OpenShift Enterprise 3.2 and later. We’ll be using the Red Hat Container Development Kit (CDK) to get started quickly.

The CDK provides a pre-built container development environment based on Red Hat Enterprise Linux to help you develop container-based (sometimes called Docker) applications quickly. The containers you build can be easily deployed on any Red Hat container host or platform, including Red Hat Enterprise Linux, Red Hat Enterprise Linux Atomic Host, and our platform-as-a-service solution, OpenShift Enterprise 3.

Prerequisites

Continue reading “Red Hat JBoss Data Virtualization on OpenShift: Part 1 – Getting started”


Configuring and Using Persistent Memory in RHEL 7.3

Persistent memory, or pmem, is an exciting new storage technology that combines the durability of storage with the low access latencies and high bandwidth of DRAM.  In this article, we’ll discuss the types of pmem hardware, a new programming model for pmem, and how to get access to pmem through the OS.

Persistent memory, sometimes called storage class memory, can be thought of as a cross between memory and storage. It shares a couple of properties with memory. First, it is byte addressable, meaning it can be accessed using CPU load and store instructions, as opposed to read() or write() system calls that are required for accessing traditional block-based storage. Second, pmem has the same order of magnitude performance as DRAM, meaning it has very low access latencies (measured in the tens to hundreds of nanoseconds). In addition to these beneficial memory-like properties, contents of persistent memory are preserved when the power is off, just as with storage. Taken together, these characteristics make persistent memory unique in the storage world.
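
To make the load/store programming model concrete, the sketch below memory-maps a file that is assumed to live on a DAX-mounted pmem filesystem (the path /mnt/pmem/example.dat is hypothetical) and writes to it with ordinary memory accesses instead of write() system calls. It uses Java’s MappedByteBuffer purely for illustration; production persistent memory code is more commonly written in C against libraries such as libpmem, but the memory-mapping idea is the same.

    import java.io.RandomAccessFile;
    import java.nio.MappedByteBuffer;
    import java.nio.channels.FileChannel;
    import java.nio.charset.StandardCharsets;

    public class PmemMappedWrite {
        public static void main(String[] args) throws Exception {
            // Hypothetical file on a DAX-mounted persistent memory filesystem.
            try (RandomAccessFile file = new RandomAccessFile("/mnt/pmem/example.dat", "rw");
                 FileChannel channel = file.getChannel()) {

                // Map 4 KiB of the file. On a DAX mount, the mapping is backed by the
                // pmem itself, so the buffer is byte addressable like ordinary memory.
                MappedByteBuffer buf = channel.map(FileChannel.MapMode.READ_WRITE, 0, 4096);

                // Store data with plain memory writes -- no write() system call.
                buf.put("hello, persistent memory".getBytes(StandardCharsets.UTF_8));

                // Flush the stores so the data is durable if power is lost.
                buf.force();
            }
        }
    }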

Continue reading “Configuring and Using Persistent Memory in RHEL 7.3”