The microprofile-fault-tolerance
quickstart demonstrates how to use Eclipse MicroProfile Fault Tolerance in WildFly.
One of the challenges brought by the distributed nature of microservices is that communication with external systems is inherently unreliable. This increases the demand for resiliency in applications. To simplify building more resilient applications, WildFly contains an implementation of the MicroProfile Fault Tolerance specification.
In this guide, we demonstrate the usage of the MicroProfile Fault Tolerance annotations @Timeout, @Fallback, @Retry and @CircuitBreaker. The specification also introduces the @Bulkhead and @Asynchronous interceptor bindings, which are not covered in this guide.
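For reference only, here is a minimal sketch of how those two annotations might look on a CDI bean; the bean and method below are hypothetical and are not part of the application built in this guide:
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionStage;
import jakarta.enterprise.context.ApplicationScoped;
import org.eclipse.microprofile.faulttolerance.Asynchronous;
import org.eclipse.microprofile.faulttolerance.Bulkhead;
@ApplicationScoped
public class InventoryClient { // hypothetical bean, for illustration only
    // Limit the method to 5 concurrent invocations and run it asynchronously,
    // returning a CompletionStage instead of blocking the caller.
    @Asynchronous
    @Bulkhead(5)
    public CompletionStage<String> fetchInventory() {
        return CompletableFuture.completedFuture("inventory data");
    }
}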
The application built in this guide simulates a simple backend for a gourmet coffee on-line store. It implements a REST endpoint providing information about coffee samples we have in store.
Let’s imagine, although it’s not implemented as such, that some methods in our endpoint require communication to external services like a database or an external microservice, which introduces a factor of unreliability. This is simulated in our code by intentionally throwing exceptions with certain probability. Then we use the MicroProfile Fault Tolerance annotations to overcome these failures.
We recommend that you follow the instructions that create the application from scratch. However, you can also deploy the completed example which is available in this directory.
The application this project produces is designed to be run on WildFly Application Server 35 or later.
All you need to build this project is Java SE 17.0 or later, and Maven 3.6.0 or later. See Configure Maven to Build and Deploy the Quickstarts to make sure you are configured correctly for testing the quickstarts.
In the following instructions, replace WILDFLY_HOME
with the actual path to your WildFly installation. The installation path is described in detail here: Use of WILDFLY_HOME and JBOSS_HOME Variables.
When you see the replaceable variable QUICKSTART_HOME, replace it with the path to the root directory of all of the quickstarts.
-
Open a terminal and navigate to the root of the WildFly directory.
-
Start the WildFly server with the MicroProfile profile by typing the following command.
$ WILDFLY_HOME/bin/standalone.sh -c standalone-microprofile.xml
Note: For Windows, use the WILDFLY_HOME\bin\standalone.bat script.
In this section we will go through the steps to create a new Jakarta REST deployment from scratch and then make it more resilient by using MicroProfile Fault Tolerance annotations.
First, we need to generate a Maven project. Open a terminal and create an empty Maven project with the following command:
mvn archetype:generate \
-DgroupId=org.wildfly.quickstarts.microprofile.faulttolerance \
-DartifactId=microprofile-fault-tolerance \
-DarchetypeGroupId=org.apache.maven.archetypes \
-DarchetypeArtifactId=maven-archetype-webapp \
-DinteractiveMode=false
cd microprofile-fault-tolerance
Now, open the project in your favorite IDE.
Next, the project's pom.xml should be updated so the dependencies required by this quickstart are available and a plug-in is configured that can deploy the quickstart directly to WildFly.
Add the following properties to the pom.xml:
<version.bom.expansion>{versionExpansionBom}</version.bom.expansion>
<version.bom.ee>{versionServerBom}</version.bom.ee>
The project can also be updated to use Java 17 as the minimum:
<maven.compiler.release>17</maven.compiler.release>
Before the dependencies are defined, add the following BOMs:
<dependencyManagement>
<dependencies>
<!-- importing the ee-with-tools BOM adds specs and other useful artifacts as managed dependencies -->
<dependency>
<groupId>org.wildfly.bom</groupId>
<artifactId>wildfly-ee-with-tools</artifactId>
<version>{versionServerBom}</version>
<type>pom</type>
<scope>import</scope>
</dependency>
<!-- importing the Expansion BOM adds MicroProfile specs -->
<dependency>
<groupId>org.wildfly.bom</groupId>
<artifactId>wildfly-expansion</artifactId>
<version>{versionExpansionBom}</version>
<type>pom</type>
<scope>import</scope>
</dependency>
</dependencies>
</dependencyManagement>
By using BOMs, the majority of dependencies used within this quickstart align with the versions used by the application server.
The following dependencies can now be added to the project.
<dependencies>
<dependency>
<groupId>org.eclipse.microprofile.fault-tolerance</groupId>
<artifactId>microprofile-fault-tolerance-api</artifactId>
<scope>provided</scope>
</dependency>
<dependency>
<groupId>jakarta.enterprise</groupId>
<artifactId>jakarta.enterprise.cdi-api</artifactId>
<scope>provided</scope>
</dependency>
<dependency>
<groupId>org.jboss.resteasy</groupId>
<artifactId>resteasy-jaxrs</artifactId>
<scope>provided</scope>
</dependency>
<dependency>
<groupId>org.jboss.logging</groupId>
<artifactId>jboss-logging</artifactId>
<scope>provided</scope>
</dependency>
</dependencies>
Note that all dependencies have the scope provided, because WildFly already provides these APIs at runtime and they do not need to be packaged inside the deployment.
As we are going to be deploying this application to the WildFly server, let's also add a Maven plugin that will simplify working with the application server. Add the following plugin configuration to the pom.xml:
<build>
<plugins>
...
<plugin>
<groupId>org.wildfly.plugins</groupId>
<artifactId>wildfly-maven-plugin</artifactId>
</plugin>
</plugins>
</build>
Set up the required Maven repositories (if you don't have them set up in your global Maven settings):
<repositories>
<repository>
<id>jboss-public-maven-repository</id>
<name>JBoss Public Maven Repository</name>
<url>https://repository.jboss.org/nexus/content/groups/public</url>
<layout>default</layout>
<releases>
<enabled>true</enabled>
<updatePolicy>never</updatePolicy>
</releases>
<snapshots>
<enabled>true</enabled>
<updatePolicy>never</updatePolicy>
</snapshots>
</repository>
<repository>
<id>redhat-ga-maven-repository</id>
<name>Red Hat GA Maven Repository</name>
<url>https://maven.repository.redhat.com/ga/</url>
<layout>default</layout>
<releases>
<enabled>true</enabled>
<updatePolicy>never</updatePolicy>
</releases>
<snapshots>
<enabled>true</enabled>
<updatePolicy>never</updatePolicy>
</snapshots>
</repository>
</repositories>
<pluginRepositories>
<pluginRepository>
<id>jboss-public-maven-repository</id>
<name>JBoss Public Maven Repository</name>
<url>https://repository.jboss.org/nexus/content/groups/public</url>
<releases>
<enabled>true</enabled>
</releases>
<snapshots>
<enabled>true</enabled>
</snapshots>
</pluginRepository>
<pluginRepository>
<id>redhat-ga-maven-repository</id>
<name>Red Hat GA Maven Repository</name>
<url>https://maven.repository.redhat.com/ga/</url>
<releases>
<enabled>true</enabled>
</releases>
<snapshots>
<enabled>true</enabled>
</snapshots>
</pluginRepository>
</pluginRepositories>
Now we are ready to start developing an application with MicroProfile Fault Tolerance capabilities.
In this section we create a skeleton of our application, so that we have something that we can extend and to which we can add fault tolerance features later on.
First, create a simple entity representing a coffee sample in our store:
package org.wildfly.quickstarts.microprofile.faulttolerance;
public class Coffee {
public Integer id;
public String name;
public String countryOfOrigin;
public Integer price;
public Coffee() {
}
public Coffee(Integer id, String name, String countryOfOrigin, Integer price) {
this.id = id;
this.name = name;
this.countryOfOrigin = countryOfOrigin;
this.price = price;
}
}
Now, let's expose our Jakarta REST application at the root context path:
package org.wildfly.quickstarts.microprofile.faulttolerance;
import jakarta.ws.rs.ApplicationPath;
import jakarta.ws.rs.core.Application;
@ApplicationPath("/")
public class CoffeeApplication extends Application {
}
Let's continue with a simple CDI bean that will serve as a repository of our coffee samples.
package org.wildfly.quickstarts.microprofile.faulttolerance;
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;
import jakarta.enterprise.context.ApplicationScoped;
@ApplicationScoped
public class CoffeeRepositoryService {
private Map<Integer, Coffee> coffeeList = new HashMap<>();
public CoffeeRepositoryService() {
coffeeList.put(1, new Coffee(1, "Fernandez Espresso", "Colombia", 23));
coffeeList.put(2, new Coffee(2, "La Scala Whole Beans", "Bolivia", 18));
coffeeList.put(3, new Coffee(3, "Dak Lak Filter", "Vietnam", 25));
}
public List<Coffee> getAllCoffees() {
return new ArrayList<>(coffeeList.values());
}
public Coffee getCoffeeById(Integer id) {
return coffeeList.get(id);
}
public List<Coffee> getRecommendations(Integer id) {
if (id == null) {
return Collections.emptyList();
}
return coffeeList.values().stream()
.filter(coffee -> !id.equals(coffee.id))
.limit(2)
.collect(Collectors.toList());
}
}
Finally, create the org.wildfly.quickstarts.microprofile.faulttolerance.CoffeeResource
class as follows:
package org.wildfly.quickstarts.microprofile.faulttolerance;
import java.util.List;
import java.util.Random;
import java.util.concurrent.atomic.AtomicLong;
import jakarta.inject.Inject;
import jakarta.ws.rs.GET;
import jakarta.ws.rs.Path;
import jakarta.ws.rs.Produces;
import jakarta.ws.rs.core.MediaType;
import org.jboss.logging.Logger;
@Path("/coffee")
@Produces(MediaType.APPLICATION_JSON)
public class CoffeeResource {
private static final Logger LOGGER = Logger.getLogger(CoffeeResource.class);
@Inject
private CoffeeRepositoryService coffeeRepository;
private AtomicLong counter = new AtomicLong(0);
@GET
public List<Coffee> coffees() {
final Long invocationNumber = counter.getAndIncrement();
maybeFail(String.format("CoffeeResource#coffees() invocation #%d failed", invocationNumber));
LOGGER.infof("CoffeeResource#coffees() invocation #%d returning successfully", invocationNumber);
return coffeeRepository.getAllCoffees();
}
private void maybeFail(String failureLogMessage) {
if (new Random().nextBoolean()) {
LOGGER.error(failureLogMessage);
throw new RuntimeException("Resource failure.");
}
}
}
At this point, we expose a single REST method that returns the list of coffee samples in JSON format. Note that we introduced some failure-inducing code in the CoffeeResource#maybeFail() method, which causes the CoffeeResource#coffees() endpoint method to fail in about 50% of requests.
Let’s check that our application works!
-
Make sure the WildFly server is started as described above.
-
Open a new terminal and navigate to the root directory of your project.
-
Type the following command to build and deploy the project:
mvn clean package wildfly:deploy
Then, open http://localhost:8080/microprofile-fault-tolerance/coffee
in your browser and make a couple of requests.
Some requests should show the list of our coffee samples in JSON; the rest will fail with a RuntimeException thrown in CoffeeResource#maybeFail().
Leave the WildFly server running, and in your IDE add the @Retry annotation to the CoffeeResource#coffees() method as follows and save the file:
import org.eclipse.microprofile.faulttolerance.Retry;
...
public class CoffeeResource {
...
@GET
@Retry(maxRetries = 4)
public List<Coffee> coffees() {
...
}
...
}
Rebuild and redeploy the application to the WildFly server:
mvn wildfly:deploy
You can reload the page a couple more times. Practically all requests should now succeed. The CoffeeResource#coffees() method is in fact still failing in about 50% of cases, but every time it happens, the platform automatically retries the call!
To see that the failures still happen, check the server output. The log messages should be similar to these:
18:29:20,901 ERROR [org.wildfly.quickstarts.microprofile.faulttolerance.CoffeeResource] (default task-3) CoffeeResource#coffees() invocation #0 failed
18:29:20,901 INFO [org.wildfly.quickstarts.microprofile.faulttolerance.CoffeeResource] (default task-3) CoffeeResource#coffees() invocation #1 returning successfully
18:29:21,315 ERROR [org.wildfly.quickstarts.microprofile.faulttolerance.CoffeeResource] (default task-3) CoffeeResource#coffees() invocation #0 failed
18:29:21,337 ERROR [org.wildfly.quickstarts.microprofile.faulttolerance.CoffeeResource] (default task-3) CoffeeResource#coffees() invocation #1 failed
18:29:21,502 ERROR [org.wildfly.quickstarts.microprofile.faulttolerance.CoffeeResource] (default task-3) CoffeeResource#coffees() invocation #2 failed
18:29:21,654 INFO [org.wildfly.quickstarts.microprofile.faulttolerance.CoffeeResource] (default task-3) CoffeeResource#coffees() invocation #3 returning successfully
You can see that every time an invocation fails, it is immediately followed by another invocation, until one succeeds. Since we allowed 4 retries, 5 invocations in a row would have to fail for the user to actually be exposed to a failure. That is fairly unlikely to happen.
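The @Retry annotation also accepts further parameters to tune this behavior. The following is only a sketch with illustrative values that are not used in this quickstart; it shows how the retries could be spaced out, capped in total duration, and restricted to specific exception types:
import java.time.temporal.ChronoUnit;
import org.eclipse.microprofile.faulttolerance.Retry;
...
public class CoffeeResource {
    ...
    @GET
    // Retry up to 4 times, waiting 100 ms between attempts, only for
    // RuntimeException, and for at most 2 seconds in total.
    @Retry(maxRetries = 4, delay = 100, delayUnit = ChronoUnit.MILLIS,
            maxDuration = 2, durationUnit = ChronoUnit.SECONDS,
            retryOn = RuntimeException.class)
    public List<Coffee> coffees() {
        ...
    }
    ...
}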
So what else have we got in MicroProfile Fault Tolerance? Let’s look into timeouts.
Add the following two methods to our CoffeeResource endpoint and redeploy to the running server.
import org.jboss.resteasy.annotations.jaxrs.PathParam;
import org.eclipse.microprofile.faulttolerance.Timeout;
...
public class CoffeeResource {
...
@GET
@Path("/{id}/recommendations")
@Timeout(250)
public List<Coffee> recommendations(@PathParam("id") int id) {
long started = System.currentTimeMillis();
final long invocationNumber = counter.getAndIncrement();
try {
randomDelay();
LOGGER.infof("CoffeeResource#recommendations() invocation #%d returning successfully", invocationNumber);
return coffeeRepository.getRecommendations(id);
} catch (InterruptedException e) {
LOGGER.errorf("CoffeeResource#recommendations() invocation #%d timed out after %d ms",
invocationNumber, System.currentTimeMillis() - started);
return null;
}
}
private void randomDelay() throws InterruptedException {
Thread.sleep(new Random().nextInt(500));
}
}
Rebuild and redeploy the application:
mvn wildfly:deploy
We added some new functionality: we want to be able to recommend related coffees based on the coffee a user is currently looking at. It's not critical functionality; it's a nice-to-have. When the system is overloaded and the logic behind obtaining recommendations takes too long to execute, we would rather time out and render the UI without recommendations.
Note that the timeout was configured to 250 ms, and a random artificial delay between 0 and 500 ms was introduced into the CoffeeResource#recommendations() method.
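The value of @Timeout is interpreted in milliseconds by default. If you prefer to make the unit explicit, the annotation used above can equivalently be written as in the following sketch; no additional change is needed:
import java.time.temporal.ChronoUnit;
...
    // Equivalent to @Timeout(250): throw a TimeoutException if the method
    // does not return within 250 milliseconds.
    @Timeout(value = 250, unit = ChronoUnit.MILLIS)
    public List<Coffee> recommendations(@PathParam("id") int id) {
        ...
    }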
In your browser, go to http://localhost:8080/microprofile-fault-tolerance/coffee/2/recommendations
and hit reload a couple of times.
You should see some requests time out with org.eclipse.microprofile.faulttolerance.exceptions.TimeoutException
.
Requests that do not time out should show two recommended coffee samples in JSON.
Let’s improve the recommendations feature by providing a fallback functionality for the case when a timeout happens.
Add a fallback method to CoffeeResource and a @Fallback annotation to the CoffeeResource#recommendations() method as follows:
import java.util.Collections;
import org.eclipse.microprofile.faulttolerance.Fallback;
...
public class CoffeeResource {
...
@Fallback(fallbackMethod = "fallbackRecommendations")
public List<Coffee> recommendations(@PathParam("id") int id) {
...
}
public List<Coffee> fallbackRecommendations(int id) {
LOGGER.info("Falling back to RecommendationResource#fallbackRecommendations()");
// safe bet, return something that everybody likes
return Collections.singletonList(coffeeRepository.getCoffeeById(1));
}
...
}
Rebuild and redeploy the application.
Hit reload several times on http://localhost:8080/microprofile-fault-tolerance/coffee/2/recommendations
.
The TimeoutException
should not appear anymore. Instead, in case of a timeout, the page will
display a single recommendation that we hardcoded in our fallback method fallbackRecommendations()
, rather than
two recommendations returned by the original method.
Check the server output to see that fallback is really happening:
18:36:01,873 INFO [org.wildfly.quickstarts.microprofile.faulttolerance.CoffeeResource] (default task-3) CoffeeResource#recommendations() invocation #0 returning successfully
18:36:02,705 ERROR [org.wildfly.quickstarts.microprofile.faulttolerance.CoffeeResource] (default task-3) CoffeeResource#recommendations() invocation #0 timed out after 253 ms
18:36:02,706 INFO [org.wildfly.quickstarts.microprofile.faulttolerance.CoffeeResource] (default task-3) Falling back to RecommendationResource#fallbackRecommendations()
Note: The fallback method is required to have the same parameters as the original method.
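If a fallback method with matching parameters is not convenient, the specification also allows the fallback logic to live in a separate class implementing FallbackHandler. The sketch below uses a hypothetical handler class that is not part of this quickstart:
package org.wildfly.quickstarts.microprofile.faulttolerance;
import java.util.Collections;
import java.util.List;
import org.eclipse.microprofile.faulttolerance.ExecutionContext;
import org.eclipse.microprofile.faulttolerance.FallbackHandler;
// Hypothetical handler; the type parameter must match the guarded method's return type.
public class RecommendationsFallbackHandler implements FallbackHandler<List<Coffee>> {
    @Override
    public List<Coffee> handle(ExecutionContext context) {
        // context.getParameters() and context.getFailure() expose the original
        // arguments and the exception that triggered the fallback.
        return Collections.emptyList();
    }
}
The guarded method would then reference the handler class instead of a method name, for example @Fallback(RecommendationsFallbackHandler.class).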
A circuit breaker is useful for limiting the number of failures happening in the system when part of it becomes temporarily unstable. The circuit breaker records successful and failed invocations of a method, and when the ratio of failed invocations reaches the specified threshold, the circuit breaker opens and blocks all further invocations of that method for a given time.
Add the following code into the CoffeeRepositoryService
bean, so that we can demonstrate a circuit breaker in action:
import java.util.Random;
import java.util.concurrent.atomic.AtomicLong;
import org.eclipse.microprofile.faulttolerance.CircuitBreaker;
...
public class CoffeeRepositoryService {
...
private AtomicLong counter = new AtomicLong(0);
@CircuitBreaker(requestVolumeThreshold = 4)
public Integer getAvailability(Coffee coffee) {
maybeFail();
return new Random().nextInt(30);
}
private void maybeFail() {
// introduce some artificial failures
final Long invocationNumber = counter.getAndIncrement();
if (invocationNumber % 4 > 1) { // alternate 2 successful and 2 failing invocations
throw new RuntimeException("Service failed.");
}
}
}
and add the code below to the CoffeeResource endpoint (this method also needs an import of jakarta.ws.rs.core.Response):
public class CoffeeResource {
...
@Path("/{id}/availability")
@GET
public Response availability(@PathParam("id") int id) {
final Long invocationNumber = counter.getAndIncrement();
Coffee coffee = coffeeRepository.getCoffeeById(id);
// check that coffee with given id exists, return 404 if not
if (coffee == null) {
return Response.status(Response.Status.NOT_FOUND).build();
}
try {
Integer availability = coffeeRepository.getAvailability(coffee);
LOGGER.infof("CoffeeResource#availability() invocation #%d returning successfully", invocationNumber);
return Response.ok(availability).build();
} catch (RuntimeException e) {
String message = e.getClass().getSimpleName() + ": " + e.getMessage();
LOGGER.errorf("CoffeeResource#availability() invocation #%d failed: %s", invocationNumber, message);
return Response.status(Response.Status.INTERNAL_SERVER_ERROR)
.entity(message)
.type(MediaType.TEXT_PLAIN_TYPE)
.build();
}
}
...
}
Rebuild and redeploy the application.
We added another piece of functionality: the application can return the number of remaining packages of a given coffee in our store (just a random number). This time, an artificial failure was introduced in the CDI bean: the CoffeeRepositoryService#getAvailability() method is going to alternate between two successful and two failed invocations.
We also added a @CircuitBreaker annotation with requestVolumeThreshold = 4. CircuitBreaker.failureRatio defaults to 0.5, and CircuitBreaker.delay defaults to 5 seconds. That means that the circuit breaker will open when 2 of the last 4 invocations fail, and it will stay open for 5 seconds.
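Spelled out with the specification defaults made explicit, the annotation used above is equivalent to the following sketch (successThreshold is the default value and is not changed in this quickstart):
import java.time.temporal.ChronoUnit;
...
    // Open the circuit when at least 50% of the last 4 invocations failed,
    // keep it open for 5 seconds, then close it again after 1 successful
    // trial invocation in the half-open state.
    @CircuitBreaker(requestVolumeThreshold = 4,
            failureRatio = 0.5,
            delay = 5, delayUnit = ChronoUnit.SECONDS,
            successThreshold = 1)
    public Integer getAvailability(Coffee coffee) {
        ...
    }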
To test this out, do the following:
-
Go to http://localhost:8080/microprofile-fault-tolerance/coffee/2/availability in your browser. You should see a number being returned.
-
Hit reload; this second request should again be successful and return a number.
-
Reload two more times. Both times you should see the text "RuntimeException: Service failed.", which is the exception thrown by CoffeeRepositoryService#getAvailability().
-
Reload a couple more times. Unless you waited too long, you should again see an exception, but this time it is "CircuitBreakerOpenException: getAvailability". This exception indicates that the circuit breaker opened and the CoffeeRepositoryService#getAvailability() method is not being called anymore.
-
Give it 5 seconds, during which the circuit breaker should close. You should be able to make two successful requests again.
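If you do not want callers to ever see the CircuitBreakerOpenException, the circuit breaker can be combined with the @Fallback annotation shown earlier. This is an optional sketch rather than a change made in this quickstart; fallbackAvailability is a hypothetical method:
    // Fall back to a fixed value both when the service fails and while the
    // circuit breaker is open.
    @CircuitBreaker(requestVolumeThreshold = 4)
    @Fallback(fallbackMethod = "fallbackAvailability")
    public Integer getAvailability(Coffee coffee) {
        maybeFail();
        return new Random().nextInt(30);
    }
    // Hypothetical fallback; it must have the same parameters as the guarded method.
    public Integer fallbackAvailability(Coffee coffee) {
        return 0;
    }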
This section shows how to work with the complete quickstart.
-
Make sure the WildFly server is started.
-
Open a terminal and navigate to the root directory of this quickstart.
-
Type the following command to build the quickstart.
$ mvn clean package
-
Type the following command to deploy the quickstart.
$ mvn wildfly:deploy
This deploys the microprofile-fault-tolerance/target/microprofile-fault-tolerance.war
to the running instance of the server.
You should see a message in the server log indicating that the archive deployed successfully.
You can visit the following URLs in your browser:
-
http://localhost:8080/microprofile-fault-tolerance/coffee
This URL should almost always return a response with JSON data. Even though internally the endpoint fails in about 50% of cases (you can see those errors in the server log), it uses automatic retries to mitigate these failures.
See details in the section Adding Resiliency: Retries.
-
http://localhost:8080/microprofile-fault-tolerance/coffee/1/recommendations
This endpoint also fails in about 50% of cases, but it uses a fallback strategy to provide an alternative response if normal processing fails. As a result, the response will either contain a JSON list with several items (a successful response) or just a single item (a fallback response).
See details in sections Adding Resiliency: Timeouts and Adding Resiliency: Fallbacks.
-
http://localhost:8080/microprofile-fault-tolerance/coffee/1/availability
This endpoint demonstrates the use of a circuit breaker. It returns a sequence of two successful responses, followed by a sequence of two failed ones. After that, the circuit breaker will open and will block all further request processing for the duration of 5 seconds.
See details in the section Adding Resiliency: Circuit Breakers.
This quickstart includes integration tests, which are located under the src/test/
directory. The integration tests verify that the quickstart runs correctly when deployed on the server.
Follow these steps to run the integration tests.
-
Make sure the WildFly server is started.
-
Make sure the quickstart is deployed.
-
Type the following command to run the verify goal with the integration-testing profile activated:
$ mvn verify -Pintegration-testing
You can use the WildFly Maven Plugin to build a WildFly bootable JAR to run this quickstart.
The quickstart pom.xml
file contains a Maven profile named bootable-jar, which activates the bootable JAR packaging when provisioning WildFly, through the <bootable-jar>true</bootable-jar>
configuration element:
<profile>
<id>bootable-jar</id>
<activation>
<activeByDefault>true</activeByDefault>
</activation>
<build>
<plugins>
<plugin>
<groupId>org.wildfly.plugins</groupId>
<artifactId>wildfly-maven-plugin</artifactId>
<configuration>
<discover-provisioning-info>
<version>${version.server}</version>
</discover-provisioning-info>
<bootable-jar>true</bootable-jar>
<add-ons>...</add-ons>
</configuration>
<executions>
<execution>
<goals>
<goal>package</goal>
</goals>
</execution>
</executions>
</plugin>
...
</plugins>
</build>
</profile>
The bootable-jar profile is active by default, and when the project is built, the WildFly bootable JAR file is named microprofile-fault-tolerance-bootable.jar and may be found in the target directory.
-
Ensure the bootable jar is built.
$ mvn clean package
-
Start the WildFly bootable JAR using the WildFly Maven Plugin start-jar goal:
$ mvn wildfly:start-jar
Note: You may also start the bootable JAR without Maven, using the java command:
$ java -jar target/microprofile-fault-tolerance-bootable.jar
-
Run the integration tests using the verify goal, with the integration-testing profile activated:
$ mvn verify -Pintegration-testing
-
Shut down the WildFly bootable JAR using the WildFly Maven Plugin shutdown goal:
$ mvn wildfly:shutdown
On OpenShift, the S2I build with Apache Maven uses an openshift Maven profile to provision a WildFly server, and to deploy and run the quickstart in an OpenShift environment.
The server provisioning functionality is provided by the WildFly Maven Plugin, and you may find its configuration in the quickstart pom.xml
:
<profile>
<id>openshift</id>
<build>
<plugins>
<plugin>
<groupId>org.wildfly.plugins</groupId>
<artifactId>wildfly-maven-plugin</artifactId>
<configuration>
<discover-provisioning-info>
<version>${version.server}</version>
<context>cloud</context>
</discover-provisioning-info>
<add-ons>...</add-ons>
</configuration>
<executions>
<execution>
<goals>
<goal>package</goal>
</goals>
</execution>
</executions>
</plugin>
...
</plugins>
</build>
</profile>
You may note that, unlike the provisioned-server profile, it uses the cloud context, which enables a configuration tuned for the OpenShift environment.
The plugin uses WildFly Glow to discover the feature packs and layers required to run the application, and provisions a server containing those layers.
If you get an error or the server is missing some functionality which cannot be auto-discovered, you can download the WildFly Glow CLI and run the following command to see more information about what add-ons are available:
wildfly-glow show-add-ons
This section contains the basic instructions to build and deploy this quickstart to WildFly for OpenShift or WildFly for OpenShift Online using Helm Charts.
-
You must be logged in to OpenShift and have an oc client to connect to OpenShift.
-
Helm must be installed to deploy the backend on OpenShift.
Once you have installed Helm, you need to add the repository that provides Helm Charts for WildFly.
$ helm repo add wildfly https://docs.wildfly.org/wildfly-charts/
"wildfly" has been added to your repositories
$ helm search repo wildfly
NAME CHART VERSION APP VERSION DESCRIPTION
wildfly/wildfly ... ... Build and Deploy WildFly applications on OpenShift
wildfly/wildfly-common ... ... A library chart for WildFly-based applications
Log in to your OpenShift instance using the oc login
command.
The backend will be built and deployed on OpenShift with a Helm Chart for WildFly.
Navigate to the root directory of this quickstart and run the following command:
$ helm install microprofile-fault-tolerance -f charts/helm.yaml wildfly/wildfly --wait --timeout=10m0s
NAME: microprofile-fault-tolerance
...
STATUS: deployed
REVISION: 1
This command will return once the application has successfully deployed. In case of a timeout, you can check the status of the application with the following command in another terminal:
oc get deployment microprofile-fault-tolerance
The Helm Chart for this quickstart contains all the information to build an image from the source code using S2I on Java 17:
build:
uri: https://github.com/wildfly/quickstart.git
ref: main
contextDir: microprofile-fault-tolerance
deploy:
replicas: 1
This will create a new deployment on OpenShift and deploy the application.
If you want to see all the configuration elements to customize your deployment you can use the following command:
$ helm show readme wildfly/wildfly
Get the URL of the route to the deployment.
$ oc get route microprofile-fault-tolerance -o jsonpath="{.spec.host}"
Access the application in your web browser using the displayed URL.
The integration tests included with this quickstart, which verify that the quickstart runs correctly, may also be run with the quickstart running on OpenShift.
Note: The integration tests expect a deployed application, so make sure you have deployed the quickstart on OpenShift before you begin.
Run the integration tests using the following command to run the verify
goal with the integration-testing
profile activated and the proper URL:
$ mvn verify -Pintegration-testing -Dserver.host=https://$(oc get route microprofile-fault-tolerance --template='{{ .spec.host }}')
Note: The tests use SSL to connect to the quickstart running on OpenShift, so the certificates need to be trusted by the machine the tests are run from.
For Kubernetes, the build with Apache Maven uses an openshift
Maven profile to provision a WildFly server, suitable for running on Kubernetes.
The server provisioning functionality is provided by the WildFly Maven Plugin, and you may find its configuration in the quickstart pom.xml
:
<profile>
<id>openshift</id>
<build>
<plugins>
<plugin>
<groupId>org.wildfly.plugins</groupId>
<artifactId>wildfly-maven-plugin</artifactId>
<configuration>
<discover-provisioning-info>
<version>${version.server}</version>
<context>cloud</context>
</discover-provisioning-info>
<add-ons>...</add-ons>
</configuration>
<executions>
<execution>
<goals>
<goal>package</goal>
</goals>
</execution>
</executions>
</plugin>
...
</plugins>
</build>
</profile>
You may note that, unlike the provisioned-server profile, it uses the cloud context, which enables a configuration tuned for the Kubernetes environment.
The plugin uses WildFly Glow to discover the feature packs and layers required to run the application, and provisions a server containing those layers.
If you get an error or the server is missing some functionality which cannot be auto-discovered, you can download the WildFly Glow CLI and run the following command to see more information about what add-ons are available:
wildfly-glow show-add-ons
This section contains the basic instructions to build and deploy this quickstart to Kubernetes using Helm Charts.
In this example we are using Minikube as our Kubernetes provider. See the Minikube Getting Started guide for how to install it. After installing it, we start it with 4GB of memory.
minikube start --memory='4gb'
The above command should work if you have Docker installed on your machine. If you are using Podman instead of Docker, you will also need to pass in --driver=podman, as covered in the Minikube documentation.
Once Minikube has started, we need to enable its registry since that is where we will push the image needed to deploy the quickstart, and where we will tell the Helm charts to download it from.
minikube addons enable registry
In order to be able to push images to the registry, we need to make it accessible from outside Kubernetes. How we do this depends on your operating system. All of the examples below will expose it at localhost:5000.
# On Mac:
docker run --rm -it --network=host alpine ash -c "apk add socat && socat TCP-LISTEN:5000,reuseaddr,fork TCP:$(minikube ip):5000"
# On Linux:
kubectl port-forward --namespace kube-system service/registry 5000:80 &
# On Windows:
kubectl port-forward --namespace kube-system service/registry 5000:80
docker run --rm -it --network=host alpine ash -c "apk add socat && socat TCP-LISTEN:5000,reuseaddr,fork TCP:host.docker.internal:5000"
-
Helm must be installed to deploy the backend on Kubernetes.
Once you have installed Helm, you need to add the repository that provides Helm Charts for WildFly.
$ helm repo add wildfly https://docs.wildfly.org/wildfly-charts/
"wildfly" has been added to your repositories
$ helm search repo wildfly
NAME CHART VERSION APP VERSION DESCRIPTION
wildfly/wildfly ... ... Build and Deploy WildFly applications on OpenShift
wildfly/wildfly-common ... ... A library chart for WildFly-based applications
The backend will be built and deployed on Kubernetes with a Helm Chart for WildFly.
Navigate to the root directory of this quickstart and run the following commands:
mvn -Popenshift package wildfly:image
This will use the openshift
Maven profile we saw earlier to build the application, and create a Docker image containing the WildFly server with the application deployed. The name of the image will be microprofile-fault-tolerance
.
Next we need to tag the image and make it available to Kubernetes. You can push it to a registry like quay.io. In this case we tag it as localhost:5000/microprofile-fault-tolerance:latest and push it to the internal registry in our Kubernetes instance:
# Tag the image
docker tag microprofile-fault-tolerance localhost:5000/microprofile-fault-tolerance:latest
# Push the image to the registry
docker push localhost:5000/microprofile-fault-tolerance:latest
In the call to helm install below, which deploys our application to Kubernetes, we are passing in some extra arguments to tweak the Helm build:
-
--set build.enabled=false - This turns off the S2I build for the Helm chart since Kubernetes, unlike OpenShift, does not have S2I. Instead, we are providing the image to use.
-
--set deploy.route.enabled=false - This disables the route creation normally performed by the Helm chart. On Kubernetes we will use port-forwards instead to access our application, since routes are an OpenShift-specific concept and thus not available on Kubernetes.
-
--set image.name="localhost:5000/microprofile-fault-tolerance" - This tells the Helm chart to use the image we built, tagged, and pushed to Kubernetes' internal registry above.
$ helm install microprofile-fault-tolerance -f charts/helm.yaml wildfly/wildfly --wait --timeout=10m0s --set build.enabled=false --set deploy.route.enabled=false --set image.name="localhost:5000/microprofile-fault-tolerance"
NAME: microprofile-fault-tolerance
...
STATUS: deployed
REVISION: 1
This command will return once the application has successfully deployed. In case of a timeout, you can check the status of the application with the following command in another terminal:
kubectl get deployment microprofile-fault-tolerance
The Helm Chart for this quickstart contains all the information to build an image from the source code using S2I on Java 17:
build:
uri: https://github.com/wildfly/quickstart.git
ref: main
contextDir: microprofile-fault-tolerance
deploy:
replicas: 1
This will create a new deployment on Kubernetes and deploy the application.
If you want to see all the configuration elements to customize your deployment you can use the following command:
$ helm show readme wildfly/wildfly
To be able to connect to our application running in Kubernetes from outside, we need to set up a port-forward to the microprofile-fault-tolerance
service created for us by the Helm chart.
This service will run on port 8080
, and we set up the port forward to also run on port 8080
:
kubectl port-forward service/microprofile-fault-tolerance 8080:8080
The server can now be accessed via http://localhost:8080
from outside Kubernetes. Note that the command to create the port-forward will not return, so it is easiest to run this in a separate terminal.
The integration tests included with this quickstart, which verify that the quickstart runs correctly, may also be run with the quickstart running on Kubernetes.
Note: The integration tests expect a deployed application, so make sure you have deployed the quickstart on Kubernetes before you begin.
Run the integration tests using the following command to run the verify
goal with the integration-testing
profile activated and the proper URL:
$ mvn verify -Pintegration-testing -Dserver.host=http://localhost:8080