
RHOAR: Wildfly Swarm vs Spring Boot Microservices – Part 1

Introduction

Red Hat Openshift Application Runtimes (RHOAR) comes with a number of frameworks/toolkits for implementing microservices. In previous articles on Vert.x (Part 1 and Part 2), I compared Vert.x with Fuse Integration Services (FIS). In this article I am going to compare two other popular frameworks that come with RHOAR: Wildfly Swarm and Spring Boot. I am going to show you how to implement the same database access application built with Vert.x and FIS in my two previous articles, so that you can compare how difficult each framework is to use. Admittedly, this is a somewhat unfair comparison: most of you are either JEE or Spring developers, and you will always find your own framework easier to use than the others, especially compared to Vert.x, which requires learning a new way of implementing an application (reactive programming).

Wildfly Swarm for JEE Developers

If you are coming from a JEE background, you will feel right at home using Wildfly Swarm as it is a configurable JEE app server. You are going to use the same JEE technology to implement your application. Unlike standard JEE app servers such as Wildfly App Server and Red Hat Enterprise Application Platform (EAP) which deploy and run war or ear files, Wildfly Swarm, like Spring Boot, allows you to package your application as an Uber Jar, a self-contained, executable Java archive which can be run using the command: java -jar yourUberJar.jar

Fractions

Wildfly Swarm is based on the Wildfly App Server. It allows you to construct just enough app server to run your application, meaning you include only the runtime dependencies your application actually needs, resulting in the smallest footprint possible. Wildfly Swarm is aimed at microservice use cases and is not recommended for applications with a user interface.

A fraction is a unit providing a specific piece of functionality. Examples of fractions include jaxrs-jaxb and jaxrs-jsonp for building RESTful services, jpa for data persistence, jaxrs-cdi for dependency injection, etc. There are currently over 180 fractions available. The recommended way to bring in fractions is to include them as Maven dependencies expressed as Maven GAV coordinates:

org.wildfly.swarm:<fraction>:<version>, using the <groupId>, <artifactId> and <version> elements respectively. In our pom.xml file, the version is not needed as it is defined in the dependencyManagement section’s bill-of-materials (BOM) pom.

...
<dependency>
  <groupId>org.wildfly.swarm</groupId>
  <artifactId>jpa</artifactId>
</dependency>
<dependency>
  <groupId>org.wildfly.swarm</groupId>
  <artifactId>cdi</artifactId>
</dependency>
...

Refer to the pom.xml file in the project for details.

The source code is available on Github.

Configuration

Wildfly Swarm provides several mechanisms for configuring fractions: system properties, the Java API, XML, the command line, and environment-specific project stages. We are going to use the last of these to define a “local” configuration, i.e., for running Wildfly Swarm locally on the machine. We do this by adding the YAML file project-local.yml to the resources directory. Its content is shown below:

swarm:
  datasources:
    data-sources:
      CustomerDS:
        driver-name: h2
        connection-url: jdbc:h2:mem:test;DB_CLOSE_DELAY=-1;DB_CLOSE_ON_EXIT=FALSE
        user-name: sa
        password: sa

The configuration specifies the datasource, which is bound to java:jboss/datasources/CustomerDS, with the driver name (we are using the h2 in-memory database when running the application locally on the machine), connection URL, user name and password given by the driver-name, connection-url, user-name and password properties respectively.

Entity

The application we are building is CustomerRestApplication. It provides a RESTful interface, backed by CustomerService, to retrieve a single customer’s info, retrieve all customers’ info, or add a customer.

The application uses JPA for persistence. The entity is defined by the Customer.java class. Note the use of annotations: @Entity, @Table and @Id.

package com.redhat.rhoar.swarm.customer.model;

import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Table;

@Entity
@Table(name = "CUSTOMER")
public class Customer {

  @Id
  private String customerId;
  private String vipStatus;
  private Integer balance;
...

For testing, test-persistence.xml and test-load.sql are used to create the Customer table and insert two rows into it: the test-load.sql script is referenced by the javax.persistence.sql-load-script-source property in test-persistence.xml.
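The actual script is in the repository; as an illustrative sketch only, based on the two rows that the Openshift template later inserts into the postgresql database, test-load.sql could look something like:

```sql
-- Illustrative sketch only; see test-load.sql in the project for the real script.
INSERT INTO customer (customerId, vipStatus, balance) VALUES ('A01', 'Diamond', 1000);
INSERT INTO customer (customerId, vipStatus, balance) VALUES ('A02', 'Gold', 512);
```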

CustomerService

The operations: getCustomer, getCustomers and addCustomer have been implemented in CustomerService.java. Note the use of annotations:
@ApplicationScoped, @PersistenceContext and @Resource.

@ApplicationScoped
public class CustomerService {

  @PersistenceContext(unitName = "customer", type = PersistenceContextType.EXTENDED)
  EntityManager em;

  @Resource
  private UserTransaction userTransaction;

  public Customer getCustomer(String customerId) {
    return em.find(Customer.class, customerId);
  }

  public List<Customer> getCustomers() {
    return em.createQuery("select c from Customer c", Customer.class).getResultList();
  }

  public void addCustomer(Customer customer) throws Exception {
    userTransaction.begin();
    em.persist(customer);
    userTransaction.commit();
  }
}

RESTful Interfaces

The CustomerService operations are exposed using a RESTful interface using the CustomerEndpoint.java class. The CustomerService is injected using the @Inject annotation. Note also the use of the @GET, @POST for specific operations as well as @Path, @Produces/@Consumes to specify the URL path and the media type respectively.

@Path("/")
@RequestScoped
public class CustomerEndpoint {

  @Inject
  private CustomerService customerService;

  @GET
  @Path("/customer/{customerId}")
  @Produces(MediaType.APPLICATION_JSON)
  public Customer getCustomer(@PathParam("customerId") String customerId) {
    Customer customer = customerService.getCustomer(customerId);
    if (customer == null) {
      throw new NotFoundException();
    }
    return customer;
  }

Health Check

For monitoring the health of the service, we provide the check method in the HealthCheckEndpoint.java class which uses Wildfly Swarm’s monitor fraction to do the work.

@Path("/")
public class HealthCheckEndpoint {

  @GET
  @Health
  @Path("/status")
  public HealthStatus check() {
    return HealthStatus.named("server-state").up();
  }
}

Note the use of annotations: @Path, @GET, @Health. The check method is going to be used for both readiness and liveness probes when deployed to Openshift.

Testing using Arquillian

Arquillian is an integration and functional testing platform that can be used for testing our CustomerRestApplication microservice. In other words, it is a test-harness that launches application containers (web containers, not Docker containers) and executes test code both from outside and within the running application. To use Arquillian, we have to bring in the Arquillian fraction.

The RestApiTest.java deploys the microservice in a Wildfly Swarm container and invokes its REST API as an HTTP client.

To do this with Arquillian, you annotate the JUnit test with @RunWith(Arquillian.class) and define the application deployment with @Deployment, as in:

@RunWith(Arquillian.class)
public class RestApiTest {
…
  @Deployment
  public static Archive<?> createDeployment() {
    totalSW.start();
    return ShrinkWrap.create(WebArchive.class)
      .addPackages(true, CustomerRestApplication.class.getPackage())
      .addPackages(true, StopWatch.class.getPackage())
      .addAsResource("project-local.yml", "project-local.yml")
      .addAsResource("META-INF/test-persistence.xml", "META-INF/persistence.xml")
      .addAsResource("META-INF/test-load.sql", "META-INF/test-load.sql");
  }

The method createDeployment returns a ShrinkWrap archive for deployment.

We also need to create a Wildfly Swarm container using @CreateSwarm as in:

@CreateSwarm
public static Swarm newContainer() throws Exception {
  Properties properties = new Properties();
  properties.put("swarm.http.port", port);
  return new Swarm(properties).withProfile("local");
}

The JUnit tests are annotated with @Test and @RunAsClient to invoke the microservice operations running in a Wildfly Swarm container.

I found the tests run quite slowly. To find out where the time is being spent, I used Apache Commons StopWatch to time the operations. Here is the result:

Arquillian Overhead

It spends 91% (30 seconds) of the total time setting up the environment and only 9% (3 seconds) executing the tests. This means we should pack as many tests into a JUnit test module as possible to minimise the overhead. I have yet to find a way to make the tests run faster. Arquillian does require extra steps, shrink-wrapping the application deployment and creating a Wildfly Swarm container (steps not present in other JUnit testing), but this does not explain why it takes so long to set up and run the tests!
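The measurement itself needs nothing framework-specific. Here is a minimal sketch of this kind of timing using only the JDK (the sleep calls and percentages are stand-ins for Arquillian's setup and the actual test execution, not measurements from the project):

```java
// Minimal timing sketch using only the JDK; the sleep calls are stand-ins
// for Arquillian's environment setup and the actual test execution.
public class TimingDemo {

    // Runs a task and returns its elapsed wall-clock time in milliseconds.
    static long timeMs(Runnable task) {
        long start = System.nanoTime();
        task.run();
        return (System.nanoTime() - start) / 1_000_000;
    }

    static void sleep(long ms) {
        try {
            Thread.sleep(ms);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    public static void main(String[] args) {
        long setupMs = timeMs(() -> sleep(300)); // stand-in for container setup
        long testMs  = timeMs(() -> sleep(30));  // stand-in for test execution

        long total = setupMs + testMs;
        System.out.printf("setup: %d ms (%.0f%%), tests: %d ms (%.0f%%)%n",
                setupMs, 100.0 * setupMs / total,
                testMs, 100.0 * testMs / total);
    }
}
```

Wrapping each phase in a helper like timeMs keeps the instrumentation out of the test logic, which is essentially what StopWatch does for you.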

There is another JUnit test, CustomerServiceTest.java, whose tests are executed entirely within the Wildfly Swarm container. Hence, you won’t find the @RunAsClient annotation after the @Test annotation as before. Have a read and contrast it with RestApiTest.java.

Openshift Deployment

My Openshift environment is “oc cluster up” running on a virtual machine. From the virtual machine’s command prompt, issue the following commands:

# login to Openshift as developer
oc login -u developer

export SWARM_PRJ=swarm-customer

# create an Openshift project
oc new-project $SWARM_PRJ

# create a postgresql database
oc process -f etc/customer-service-postgresql-persistence.yml \
  -p CUSTOMER_DB_USERNAME=jboss -p CUSTOMER_DB_PASSWORD=jboss \
  -p CUSTOMER_DB_NAME=customerdb | oc create -f - -n $SWARM_PRJ

# create a config map
oc create configmap app-config --from-file=src/main/resources/project-defaults.yml -n $SWARM_PRJ

# deploy swarm app to Openshift
mvn clean fabric8:deploy -DskipTests=true -Popenshift -Dfabric8.namespace=$SWARM_PRJ

The etc/customer-service-postgresql-persistence.yml template defines how the postgresql database is to be deployed: it is ephemeral, and an Openshift post deployment lifecycle hook (recreateParams) is used to create a table and populate it with two rows of data.

strategy:
  recreateParams:
    timeoutSeconds: 600
    post:
      failurePolicy: ignore
      execNewPod:
        containerName: ${APPLICATION_NAME}
        command:
        - /bin/sh
        - -i
        - -c
        - sleep 10 && PGPASSWORD=$POSTGRESQL_PASSWORD psql -h $CUSTOMER_POSTGRESQL_SERVICE_HOST -U $POSTGRESQL_USER -q -d $POSTGRESQL_DATABASE -c "$POSTGRESQL_INIT"
        env:
        - name: POSTGRESQL_INIT
          value: >-
            CREATE TABLE customer (customerId character varying(255) NOT NULL,
            vipStatus character varying(255), balance integer NOT NULL);
            ALTER TABLE customer OWNER TO jboss;
            ALTER TABLE ONLY customer ADD CONSTRAINT customer_pkey PRIMARY KEY (customerId);
            INSERT into customer (customerId, vipStatus, balance) values ('A01', 'Diamond', 1000);
            INSERT into customer (customerId, vipStatus, balance) values ('A02', 'Gold', 512);

ConfigMaps allow you to decouple configuration artifacts from the Openshift Docker image content to keep containerized applications portable. The src/main/resources/project-defaults.yml file is used to mount the ConfigMap to a well-known directory in the container. The WildFly Swarm start-up command refers to the mounted properties file.

The src/main/fabric8/deployment.yml exposes the health endpoint for both readiness and liveness checks as well as defining the volume that references the configmap.
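As an illustrative sketch only (the port, paths and delays here are assumptions; see the actual src/main/fabric8/deployment.yml for the real values), the probe section of such a deployment could look like:

```yaml
# Illustrative only - the real settings live in src/main/fabric8/deployment.yml.
readinessProbe:
  httpGet:
    path: /health        # health endpoint path assumed
    port: 8080
  initialDelaySeconds: 20
livenessProbe:
  httpGet:
    path: /health
    port: 8080
  initialDelaySeconds: 60
```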

After setting up the database and deploying the app, your project overview should look like:

swarm-service Overview

Calling the Microservice

Curl commands are used to interact with the microservice. To use the curl command, we need to know the route of the microservice, which can be found under the project’s Applications→Routes menu in the Openshift console.

Getting the Route from Openshift Console

These commands and their output are shown below:

export SWARM_URL=http://swarm-service-swarm-customer.10.0.2.15.xip.io

curl -X GET "$SWARM_URL/health"

curl -X GET "$SWARM_URL/customer/A01"

curl -i -H "Content-Type: application/json" \
  -X POST -d '{"customerId":"A03","vipStatus":"Platinum","balance":2200}' \
  "$SWARM_URL/customer"

The responses from the microservice are shown below:

curl Responses

What Next?

Now that we’ve seen what a Wildfly Swarm microservice project looks like, in the next installment I shall show you the same application developed using Spring Boot. I shall then compare the two implementations and assess the pros and cons of each technology. Stay tuned!