In Part 1, I described the FIS-based microservice using a relational database and replicated its functionality using Vert.x and MongoDB with Guice dependency injection. The RestVerticle, which implements the RESTful microservice, calls the CustomerService, which retrieves/saves customer info to MongoDB directly. In this article, I am going to show you how to convert the CustomerService to run as a separate verticle, called CustomerVerticle, and have the RestVerticle communicate with it via the Vert.x EventBus. For a simple application such as this, there is no advantage to coding the application this way; I am doing it to show you how it can be done, as more complex Vert.x applications usually have multiple verticles running and communicating over the EventBus. Enabling verticles to communicate over the EventBus usually requires some boilerplate code to send messages and listen for messages. I am going to show you how to create a service proxy using code generation to obviate the need to write that boilerplate yourself, and how to convert the application developed in Part 1 to use the service proxy via dependency injection. Continue reading RHOAR: Vert.x Microservices Toolkit Compared to Fuse Integration Services – Part 2
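To give a feel for the boilerplate a generated service proxy hides, here is a stdlib-only sketch of the request/reply pattern the Vert.x EventBus provides. This is a simplified, single-JVM illustration, not actual Vert.x API: names like `MiniEventBus` and `CustomerProxy` are mine, and real Vert.x dispatch is asynchronous.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Consumer;
import java.util.function.Function;

// Illustrative stand-in for the Vert.x EventBus: handlers registered on
// string addresses, messages sent to an address, replies delivered to a
// callback. Real Vert.x does this asynchronously across verticles.
class MiniEventBus {
    private final Map<String, Function<String, String>> consumers =
            new ConcurrentHashMap<>();

    // Register a handler on an address (like eventBus().consumer(...))
    void consumer(String address, Function<String, String> handler) {
        consumers.put(address, handler);
    }

    // Send a message and hand the reply to a callback
    void send(String address, String message, Consumer<String> replyHandler) {
        replyHandler.accept(consumers.get(address).apply(message));
    }
}

// The boilerplate a generated service proxy writes for you: translating a
// method call into an EventBus message and the reply back into a result.
class CustomerProxy {
    private final MiniEventBus bus;
    CustomerProxy(MiniEventBus bus) { this.bus = bus; }

    void getCustomer(String id, Consumer<String> resultHandler) {
        bus.send("customer.service", "get:" + id, resultHandler);
    }
}

public class ProxyDemo {
    public static void main(String[] args) {
        MiniEventBus bus = new MiniEventBus();
        // The "CustomerVerticle" side: listens on an address
        bus.consumer("customer.service",
                msg -> "customer-record-for-" + msg.substring(4));
        // The "RestVerticle" side: calls through the proxy as if it were
        // a local service, unaware of the EventBus underneath
        new CustomerProxy(bus).getCustomer("42", System.out::println);
    }
}
```

With code generation, only the service interface is written by hand; the proxy and the address handler are generated.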
Red Hat recently released the Red Hat Openshift Application Runtimes (RHOAR), which includes a number of useful frameworks/toolkits: Wildfly Swarm, Vert.x, Spring Boot, Netflix OSS and node.js (technology preview).
Among the RHOAR components, Vert.x is completely new to me. Vert.x is a polyglot toolkit for building reactive applications on the JVM. This means that, like node.js, it is event driven and non-blocking and can handle high concurrency using a small number of kernel threads. In other words, it can scale with minimal hardware. In this article, I am going to experiment with Vert.x and compare it to Fuse Integration Services (FIS) in building a cloud-native microservice. So, please join me on my journey into the world of Vert.x as I build a Vert.x application that uses dependency injection and is deployed on Openshift. Continue reading RHOAR: Vert.x Microservices Toolkit Compared to Fuse Integration Services – Part 1
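The "small number of kernel threads" claim comes from the event-loop model Vert.x shares with node.js. The sketch below illustrates the concept with plain Java only; it is not Vert.x API, and a real event loop multiplexes I/O events rather than a pre-filled queue.

```java
import java.util.ArrayDeque;
import java.util.Queue;

// A stdlib-only sketch of the event-loop model: one thread pulls events
// off a queue and runs short, non-blocking handlers, so many concurrent
// "connections" do not need many threads.
public class EventLoopDemo {
    public static void main(String[] args) {
        Queue<Runnable> eventQueue = new ArrayDeque<>();

        // Simulate several connections posting events to the loop
        for (int i = 0; i < 3; i++) {
            final int conn = i;
            eventQueue.add(() -> System.out.println(
                    "handled request from connection " + conn
                            + " on " + Thread.currentThread().getName()));
        }

        // The event loop: a single thread draining the queue.
        // Handlers must never block, or every connection stalls --
        // hence Vert.x's "don't block the event loop" golden rule.
        while (!eventQueue.isEmpty()) {
            eventQueue.poll().run();
        }
    }
}
```

All three requests are handled on the same thread, which is why this style scales with minimal hardware but forbids blocking calls inside handlers.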
In my previous article, I described how to set up a Distributed Replicated GlusterFS Volume as well as a simple Replicated Volume. I also described how to use the GlusterFS Native Client to access the volumes. In this article, I am going to show you how to set up NFS and Samba clients to access the GlusterFS volume and compare the performance of the different clients. Read the article in the December issue of the ODROID-Magazine here.
Here is part 1 of my docker swarm article published in ODROID Magazine, a free monthly e-zine that has featured hardware and software articles on the latest ARM and single board computer technology since January 2014. Although this tutorial is designed to run on the ODROID-C2 single board computer, all the commands that you learn apply equally well to INTEL-based machines running the docker engine. The commands are exactly the same. Once learnt, you can apply your docker command line knowledge to different environments including Linux, macOS, Windows and cloud hosts.
Part 1 discusses the docker swarm features.
Part 2 describes the ‘docker stack deploy’ capabilities, which you can think of as Docker Compose for swarm mode.
In a previous article, I deployed services in my ODROID-C2 cluster using the Docker command line. It works, but there must be a better way to do deployment, especially when an application requires multiple components working together. Docker 1.13.x introduced the new docker stack deployment feature to allow deployment of a complete application stack to the swarm. A stack is a collection of services that make up an application. This new feature automatically deploys multiple services that are linked to each other, obviating the need to define each one separately. In other words, it is docker-compose in swarm mode. To do this, I have to upgrade my Docker Engine from V1.12.6, which I installed using apt-get from the Ubuntu software repository, to V1.13.x. Luckily for me, I had already built V1.13.1 on my ODROID-C2 myself months ago when I was experimenting unsuccessfully with swarm mode due to missing kernel modules in the OS I used at the time. See my previous article for more information on this. It is just a matter of upgrading all my ODROID-C2 nodes to V1.13.1 and I am in business. Continue reading Using Docker Stack Deploy on my ODROID-C2 Cluster
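To make the idea concrete, a stack is described by a Compose-style file and deployed to the swarm in a single command. The file below is a hypothetical example of mine (service names and images are illustrative, not from the article):

```yaml
# hypothetical stack file (stack.yml); deploy with:
#   docker stack deploy -c stack.yml mystack
version: "3"
services:
  web:
    image: httpd:alpine
    ports:
      - "8080:80"
    deploy:
      replicas: 3      # swarm schedules 3 replicas across the nodes
  cache:
    image: redis:alpine
```

One `docker stack deploy` creates both services and their overlay network together, instead of running each service separately on the command line.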
Around 6 months ago, I was experimenting with Docker swarm mode on my ODROID-C2 cluster consisting of 5 ODROID-C2s. Docker was running fine on a single machine using the ‘docker run’ command, but neither overlay networking, routing mesh nor load balancing was working in swarm mode. I tried using different versions of docker (1.12.x and 1.13.x) compiled on my ODROID-C2 to no avail, as documented in a previous article. I also tried running Kubernetes on the ODROID-C2 cluster. Again, I couldn’t get the networking part working. I suspected that the kernel was missing certain modules needed for Docker/Kubernetes networking. As the OSes I tested at the time were not stable enough, I stopped my experimentation on ARM-based machines. Instead I played with Openshift running on INTEL 64 bit machines, as INTEL always has better support than ARM in any open source project. What rekindled my passion to get Docker swarm mode working is the hardware I have lying around not being used: an ODROID VU7 multi-touch screen and a VuShell for VU7. I wanted to make use of them to do some useful work. Continue reading Running Docker Swarm Mode on My Facelifted ODROID-C2 Cluster
In my previous article, I created a reusable FIS demo. Today, I am going to show you how to use the 3scale API Management Platform to create an API for the FIS ‘products’ service. By doing so, the service gains non-functional capabilities including access control, rate limits, analytics, activeDocs, billing & payment, etc. Starting my learning journey with 3scale, I decided to set up my own self-managed 3scale gateway and use an online swagger editor to create a swagger spec to document the service. During my journey, I ran into some gotchas, mysterious behaviours, and problems that were eventually overcome. Overall, it was a good learning experience although frustrating at times. I am sharing my learning journey with you pictorially in this article. Hope you’ll find it useful. Continue reading Exposing FIS-based Service as a 3scale API – My Learning Journey Part 1
Fuse Integration Services 2.0 (FIS 2.0) for Openshift introduces a number of new features. In my opinion, the most exciting ones are the introduction of S2I binary workflow and Spring Boot support. We shall be using these 2 new features in this article. As FIS is for Openshift, as its name implies, one needs a development Openshift environment to experiment with it. There are several ways to set up a development Openshift environment on your laptop. The following are the most popular options:
oc cluster up – this is a relatively new feature introduced in Openshift Origin 1.3. This option uses a containerized version of Openshift and runs it locally on your laptop. It requires Docker to run.
Red Hat Development Suite (RHDS) – this development suite comes with an installer (Windows and Mac only at present) that installs JBoss Developer Studio, Red Hat Container Development Kit and all the necessary dependencies. It is based on Vagrant, VirtualBox, Openshift Enterprise and Red Hat Enterprise Linux. In contrast to option 1, which is based on Docker, RHDS is based on a virtual machine or VM.