The helloworld-mdb quickstart uses JMS and EJB Message-Driven Bean (MDB) to create and deploy JMS topic and queue resources in WildFly.
The helloworld-mdb quickstart demonstrates the use of JMS and EJB Message-Driven Beans in WildFly Application Server.
This project creates two JMS resources:
- A queue named HELLOWORLDMDBQueue bound in JNDI as java:/queue/HELLOWORLDMDBQueue
- A topic named HELLOWORLDMDBTopic bound in JNDI as java:/topic/HELLOWORLDMDBTopic
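Each destination is consumed by a message-driven bean packaged in the application. The following is a minimal sketch of what the queue listener might look like, assuming the jakarta.* APIs used by current WildFly releases; see the quickstart source under src/main/java for the actual implementation.

package org.jboss.as.quickstarts.mdb;

import java.util.logging.Logger;

import jakarta.ejb.ActivationConfigProperty;
import jakarta.ejb.MessageDriven;
import jakarta.jms.JMSException;
import jakarta.jms.Message;
import jakarta.jms.MessageListener;
import jakarta.jms.TextMessage;

// Illustrative listener for the HELLOWORLDMDBQueue destination.
@MessageDriven(name = "HelloWorldQueueMDB", activationConfig = {
        @ActivationConfigProperty(propertyName = "destinationLookup", propertyValue = "queue/HELLOWORLDMDBQueue"),
        @ActivationConfigProperty(propertyName = "destinationType", propertyValue = "jakarta.jms.Queue"),
        @ActivationConfigProperty(propertyName = "acknowledgeMode", propertyValue = "Auto-acknowledge") })
public class HelloWorldQueueMDB implements MessageListener {

    private static final Logger LOGGER = Logger.getLogger(HelloWorldQueueMDB.class.getName());

    // Invoked by the container for every message delivered to the queue.
    @Override
    public void onMessage(Message message) {
        try {
            if (message instanceof TextMessage) {
                LOGGER.info("Received Message from queue: " + ((TextMessage) message).getText());
            } else {
                LOGGER.warning("Message of wrong type: " + message.getClass().getName());
            }
        } catch (JMSException e) {
            throw new RuntimeException(e);
        }
    }
}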
The application this project produces is designed to be run on WildFly Application Server 35 or later.
All you need to build this project is Java SE 17.0 or later, and Maven 3.6.0 or later. See Configure Maven to Build and Deploy the Quickstarts to make sure you are configured correctly for testing the quickstarts.
In the following instructions, replace WILDFLY_HOME with the actual path to your WildFly installation. The installation path is described in detail here: Use of WILDFLY_HOME and JBOSS_HOME Variables.
When you see the replaceable variable QUICKSTART_HOME, replace it with the path to the root directory of all of the quickstarts.
- Open a terminal and navigate to the root of the WildFly directory.
- Start the WildFly server with the full profile by typing the following command.
$ WILDFLY_HOME/bin/standalone.sh -c standalone-full.xml
Note: For Windows, use the WILDFLY_HOME\bin\standalone.bat script.
- Make sure the WildFly server is started.
- Open a terminal and navigate to the root directory of this quickstart.
- Type the following command to build the quickstart.
$ mvn clean package
- Type the following command to deploy the quickstart.
$ mvn wildfly:deploy
This deploys the helloworld-mdb/target/helloworld-mdb.war to the running instance of the server.
You should see a message in the server log indicating that the archive deployed successfully.
Look at the WildFly console or server log and you should see log messages corresponding to the deployment of the message-driven beans and the JMS destinations:
INFO [org.jboss.as.server.deployment] (MSC service thread 1-1) WFLYSRV0027: Starting deployment of "helloworld-mdb.war" (runtime-name: "helloworld-mdb.war")
...
INFO [org.wildfly.extension.messaging-activemq] (MSC service thread 1-4) WFLYMSGAMQ0006: Unbound messaging object to jndi name java:/queue/HELLOWORLDMDBQueue
INFO [org.wildfly.extension.messaging-activemq] (MSC service thread 1-4) WFLYMSGAMQ0002: Bound messaging object to jndi name java:/queue/HELLOWORLDMDBQueue
INFO [org.wildfly.extension.messaging-activemq] (MSC service thread 1-2) WFLYMSGAMQ0006: Unbound messaging object to jndi name java:/topic/HELLOWORLDMDBTopic
INFO [org.wildfly.extension.messaging-activemq] (MSC service thread 1-3) WFLYMSGAMQ0002: Bound messaging object to jndi name java:/topic/HELLOWORLDMDBTopic
INFO [org.jboss.as.ejb3] (MSC service thread 1-4) WFLYEJB0042: Started message driven bean 'HelloWorldQueueMDB' with 'activemq-ra.rar' resource adapter
INFO [org.jboss.as.ejb3] (MSC service thread 1-1) WFLYEJB0042: Started message driven bean 'HelloWorldTopicMDB' with 'activemq-ra.rar' resource adapter
...
INFO [org.wildfly.extension.undertow] (ServerService Thread Pool -- 87) WFLYUT0021: Registered web context: '/helloworld-mdb' for server 'default-server'
INFO [org.jboss.as.server] (management-handler-thread - 1) WFLYSRV0010: Deployed "helloworld-mdb.war" (runtime-name : "helloworld-mdb.war")
The application will be running at the following URL: http://localhost:8080/helloworld-mdb/ and will send some messages to the queue.
To send messages to the topic, use the following URL: http://localhost:8080/helloworld-mdb/HelloWorldMDBServletClient?topic
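The URLs above are served by a servlet that injects a JMSContext and sends a batch of text messages to the queue, or to the topic when the topic request parameter is present. Below is a minimal sketch of that sending logic, assuming the JNDI names listed earlier; the servlet mapping and message count here are illustrative, not necessarily what the quickstart ships.

package org.jboss.as.quickstarts.mdb;

import java.io.IOException;
import java.io.PrintWriter;

import jakarta.annotation.Resource;
import jakarta.inject.Inject;
import jakarta.jms.Destination;
import jakarta.jms.JMSContext;
import jakarta.jms.Queue;
import jakarta.jms.Topic;
import jakarta.servlet.annotation.WebServlet;
import jakarta.servlet.http.HttpServlet;
import jakarta.servlet.http.HttpServletRequest;
import jakarta.servlet.http.HttpServletResponse;

// Illustrative servlet that sends a few text messages to the queue,
// or to the topic when the request carries a "topic" parameter.
@WebServlet("/HelloWorldMDBServletClient")
public class HelloWorldMDBServletClient extends HttpServlet {

    private static final int MSG_COUNT = 5;

    @Inject
    private JMSContext context;

    @Resource(lookup = "java:/queue/HELLOWORLDMDBQueue")
    private Queue queue;

    @Resource(lookup = "java:/topic/HELLOWORLDMDBTopic")
    private Topic topic;

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        boolean useTopic = req.getParameterMap().containsKey("topic");
        Destination destination = useTopic ? topic : queue;
        resp.setContentType("text/html");
        try (PrintWriter out = resp.getWriter()) {
            for (int i = 0; i < MSG_COUNT; i++) {
                String text = "This is message " + (i + 1);
                // JMSContext.createProducer() returns a lightweight JMSProducer for each send.
                context.createProducer().send(destination, text);
                out.println("Sent: " + text + "<br>");
            }
        }
    }
}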
Look at the WildFly console or server log and you should see log messages like the following:
INFO [class org.jboss.as.quickstarts.mdb.HelloWorldQueueMDB] (Thread-9 (ActiveMQ-client-global-threads-1189700957)) Received Message from queue: This is message 5
INFO [class org.jboss.as.quickstarts.mdb.HelloWorldQueueMDB] (Thread-6 (ActiveMQ-client-global-threads-1189700957)) Received Message from queue: This is message 1
INFO [class org.jboss.as.quickstarts.mdb.HelloWorldQueueMDB] (Thread-7 (ActiveMQ-client-global-threads-1189700957)) Received Message from queue: This is message 4
INFO [class org.jboss.as.quickstarts.mdb.HelloWorldQueueMDB] (Thread-5 (ActiveMQ-client-global-threads-1189700957)) Received Message from queue: This is message 2
INFO [class org.jboss.as.quickstarts.mdb.HelloWorldQueueMDB] (Thread-4 (ActiveMQ-client-global-threads-1189700957)) Received Message from queue: This is message 3
This quickstart includes integration tests, which are located under the src/test/ directory. The integration tests verify that the quickstart runs correctly when deployed on the server.
Follow these steps to run the integration tests.
- Make sure the WildFly server is started.
- Make sure the quickstart is deployed.
- Type the following command to run the verify goal with the integration-testing profile activated.
$ mvn verify -Pintegration-testing
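Under the hood such tests are simple HTTP checks against the deployed application. A rough sketch of one, assuming JUnit 5 and a server.host system property that defaults to the local server; the class name and assertion are illustrative.

package org.jboss.as.quickstarts.mdb;

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

import org.junit.jupiter.api.Assertions;
import org.junit.jupiter.api.Test;

// Illustrative integration test: calls the servlet over HTTP and checks that it
// responds, which implies the messages were handed to the JMS destinations.
public class BasicRuntimeIT {

    // The integration-testing profile can pass -Dserver.host to target a remote server.
    private static final String SERVER_HOST = System.getProperty("server.host", "http://localhost:8080");

    @Test
    public void testServletIsReachable() throws Exception {
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(SERVER_HOST + "/helloworld-mdb/HelloWorldMDBServletClient"))
                .GET()
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        Assertions.assertEquals(200, response.statusCode());
    }
}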
Instead of using a standard WildFly server distribution, you can alternatively provision a WildFly server to deploy and run the quickstart. The functionality is provided by the WildFly Maven Plugin, and you may find its configuration in the quickstart pom.xml:
<profile>
    <id>provisioned-server</id>
    <activation>
        <activeByDefault>true</activeByDefault>
    </activation>
    <build>
        <plugins>
            <plugin>
                <groupId>org.wildfly.plugins</groupId>
                <artifactId>wildfly-maven-plugin</artifactId>
                <configuration>
                    <discover-provisioning-info>
                        <version>${version.server}</version>
                    </discover-provisioning-info>
                    <add-ons>...</add-ons>
                </configuration>
                <executions>
                    <execution>
                        <goals>
                            <goal>package</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>
            ...
        </plugins>
    </build>
</profile>
When built, the provisioned WildFly server can be found in the target/server directory. Its usage is similar to a standard server distribution, with the simplification that you never need to specify the server configuration to be started.
Follow these steps to run the quickstart using the provisioned server.
- Make sure the server is provisioned.
$ mvn clean package
- Start the provisioned WildFly server, using the WildFly Maven Plugin start goal.
$ mvn wildfly:start
- Type the following command to run the integration tests.
$ mvn verify -Pintegration-testing
- Shut down the provisioned WildFly server.
$ mvn wildfly:shutdown
On OpenShift, the S2I build with Apache Maven uses an openshift Maven profile to provision a WildFly server, then deploy and run the quickstart in the OpenShift environment.
The server provisioning functionality is provided by the WildFly Maven Plugin, and you may find its configuration in the quickstart pom.xml:
<profile>
    <id>openshift</id>
    <build>
        <plugins>
            <plugin>
                <groupId>org.wildfly.plugins</groupId>
                <artifactId>wildfly-maven-plugin</artifactId>
                <configuration>
                    <discover-provisioning-info>
                        <version>${version.server}</version>
                        <context>cloud</context>
                    </discover-provisioning-info>
                    <add-ons>...</add-ons>
                </configuration>
                <executions>
                    <execution>
                        <goals>
                            <goal>package</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>
            ...
        </plugins>
    </build>
</profile>
You may note that unlike the provisioned-server profile, this one uses the cloud context, which enables a configuration tuned for the OpenShift environment.
The plugin uses WildFly Glow to discover the feature packs and layers required to run the application, and provisions a server containing those layers.
If you get an error or the server is missing some functionality which cannot be auto-discovered, you can download the WildFly Glow CLI and run the following command to see more information about what add-ons are available:
wildfly-glow show-add-ons
This section contains the basic instructions to build and deploy this quickstart to WildFly for OpenShift or WildFly for OpenShift Online using Helm Charts.
- You must be logged in to OpenShift and have an oc client to connect to OpenShift.
- Helm must be installed to deploy the backend on OpenShift.
Once you have installed Helm, you need to add the repository that provides Helm Charts for WildFly.
$ helm repo add wildfly https://docs.wildfly.org/wildfly-charts/
"wildfly" has been added to your repositories
$ helm search repo wildfly
NAME CHART VERSION APP VERSION DESCRIPTION
wildfly/wildfly ... ... Build and Deploy WildFly applications on OpenShift
wildfly/wildfly-common ... ... A library chart for WildFly-based applications
Log in to your OpenShift instance using the oc login command.
The backend will be built and deployed on OpenShift with a Helm Chart for WildFly.
Navigate to the root directory of this quickstart and run the following command:
$ helm install helloworld-mdb -f charts/helm.yaml wildfly/wildfly --wait --timeout=10m0s
NAME: helloworld-mdb
...
STATUS: deployed
REVISION: 1
This command will return once the application has successfully deployed. In case of a timeout, you can check the status of the application with the following command in another terminal:
oc get deployment helloworld-mdb
The Helm Chart for this quickstart contains all the information to build an image from the source code using S2I on Java 17:
build:
  uri: https://github.com/wildfly/quickstart.git
  ref: main
  contextDir: helloworld-mdb
deploy:
  replicas: 2
This will create a new deployment on OpenShift and deploy the application.
If you want to see all the configuration elements to customize your deployment, you can use the following command:
$ helm show readme wildfly/wildfly
Get the URL of the route to the deployment.
$ oc get route helloworld-mdb -o jsonpath="{.spec.host}"
Access the application in your web browser using the displayed URL.
The integration tests included with this quickstart, which verify that the quickstart runs correctly, may also be run with the quickstart running on OpenShift.
Note: The integration tests expect a deployed application, so make sure you have deployed the quickstart on OpenShift before you begin.
Run the integration tests using the following command to run the verify goal with the integration-testing profile activated and the proper URL:
$ mvn verify -Pintegration-testing -Dserver.host=https://$(oc get route helloworld-mdb --template='{{ .spec.host }}')
Note: The tests use SSL to connect to the quickstart running on OpenShift, so the certificates must be trusted by the machine the tests are run from.
For Kubernetes, the build with Apache Maven uses an openshift Maven profile to provision a WildFly server suitable for running on Kubernetes.
The server provisioning functionality is provided by the WildFly Maven Plugin, and you may find its configuration in the quickstart pom.xml:
<profile>
    <id>openshift</id>
    <build>
        <plugins>
            <plugin>
                <groupId>org.wildfly.plugins</groupId>
                <artifactId>wildfly-maven-plugin</artifactId>
                <configuration>
                    <discover-provisioning-info>
                        <version>${version.server}</version>
                        <context>cloud</context>
                    </discover-provisioning-info>
                    <add-ons>...</add-ons>
                </configuration>
                <executions>
                    <execution>
                        <goals>
                            <goal>package</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>
            ...
        </plugins>
    </build>
</profile>
You may note that unlike the provisioned-server profile, this one uses the cloud context, which enables a configuration tuned for the Kubernetes environment.
The plugin uses WildFly Glow to discover the feature packs and layers required to run the application, and provisions a server containing those layers.
If you get an error or the server is missing some functionality which cannot be auto-discovered, you can download the WildFly Glow CLI and run the following command to see more information about what add-ons are available:
wildfly-glow show-add-ons
This section contains the basic instructions to build and deploy this quickstart to Kubernetes using Helm Charts.
In this example we are using Minikube as our Kubernetes provider. See the Minikube Getting Started guide for how to install it. After installing it, we start it with 4GB of memory.
minikube start --memory='4gb'
The above command should work if you have Docker installed on your machine. If you are using Podman instead of Docker, you will also need to pass in --driver=podman, as covered in the Minikube documentation.
Once Minikube has started, we need to enable its registry since that is where we will push the image needed to deploy the quickstart, and where we will tell the Helm charts to download it from.
minikube addons enable registry
In order to be able to push images to the registry, we need to make it accessible from outside Kubernetes. How we do this depends on your operating system. All of the examples below will expose it at localhost:5000.
# On Mac:
docker run --rm -it --network=host alpine ash -c "apk add socat && socat TCP-LISTEN:5000,reuseaddr,fork TCP:$(minikube ip):5000"
# On Linux:
kubectl port-forward --namespace kube-system service/registry 5000:80 &
# On Windows:
kubectl port-forward --namespace kube-system service/registry 5000:80
docker run --rm -it --network=host alpine ash -c "apk add socat && socat TCP-LISTEN:5000,reuseaddr,fork TCP:host.docker.internal:5000"
- Helm must be installed to deploy the backend on Kubernetes.
Once you have installed Helm, you need to add the repository that provides Helm Charts for WildFly.
$ helm repo add wildfly https://docs.wildfly.org/wildfly-charts/
"wildfly" has been added to your repositories
$ helm search repo wildfly
NAME CHART VERSION APP VERSION DESCRIPTION
wildfly/wildfly ... ... Build and Deploy WildFly applications on OpenShift
wildfly/wildfly-common ... ... A library chart for WildFly-based applications
The backend will be built and deployed on Kubernetes with a Helm Chart for WildFly.
Navigate to the root directory of this quickstart and run the following commands:
mvn -Popenshift package wildfly:image
This will use the openshift Maven profile we saw earlier to build the application, and create a Docker image containing the WildFly server with the application deployed. The name of the image will be helloworld-mdb.
Next we need to tag the image and make it available to Kubernetes. You can push it to a registry like quay.io. In this case we tag it as localhost:5000/helloworld-mdb:latest and push it to the internal registry in our Kubernetes instance:
# Tag the image
docker tag helloworld-mdb localhost:5000/helloworld-mdb:latest
# Push the image to the registry
docker push localhost:5000/helloworld-mdb:latest
In the call to helm install below, which deploys our application to Kubernetes, we pass in some extra arguments to tweak the Helm build:
- --set build.enabled=false: This turns off the s2i build for the Helm chart since Kubernetes, unlike OpenShift, does not have s2i. Instead, we are providing the image to use.
- --set deploy.route.enabled=false: This disables the route creation normally performed by the Helm chart. On Kubernetes we will use port-forwards instead to access our application, since routes are an OpenShift-specific concept and thus not available on Kubernetes.
- --set image.name="localhost:5000/helloworld-mdb": This tells the Helm chart to use the image we built, tagged, and pushed to Kubernetes' internal registry above.
$ helm install helloworld-mdb -f charts/helm.yaml wildfly/wildfly --wait --timeout=10m0s --set build.enabled=false --set deploy.route.enabled=false --set image.name="localhost:5000/helloworld-mdb"
NAME: helloworld-mdb
...
STATUS: deployed
REVISION: 1
This command will return once the application has successfully deployed. In case of a timeout, you can check the status of the application with the following command in another terminal:
kubectl get deployment helloworld-mdb
The Helm Chart for this quickstart contains all the information to build an image from the source code using S2I on Java 17:
build:
  uri: https://github.com/wildfly/quickstart.git
  ref: main
  contextDir: helloworld-mdb
deploy:
  replicas: 2
This will create a new deployment on Kubernetes and deploy the application.
If you want to see all the configuration elements to customize your deployment, you can use the following command:
$ helm show readme wildfly/wildfly
To be able to connect to our application running in Kubernetes from outside, we need to set up a port-forward to the helloworld-mdb service created for us by the Helm chart.
This service runs on port 8080, and we set up the port-forward to also run on port 8080:
kubectl port-forward service/helloworld-mdb 8080:8080
The server can now be accessed via http://localhost:8080 from outside Kubernetes. Note that the command to create the port-forward will not return, so it is easiest to run it in a separate terminal.
The integration tests included with this quickstart, which verify that the quickstart runs correctly, may also be run with the quickstart running on Kubernetes.
Note: The integration tests expect a deployed application, so make sure you have deployed the quickstart on Kubernetes before you begin.
Run the integration tests using the following command to run the verify goal with the integration-testing profile activated and the proper URL:
$ mvn verify -Pintegration-testing -Dserver.host=http://localhost:8080
You can choose to deploy and run this quickstart in a managed domain or on a standalone server. The sections below describe how to configure and start each type of server.
Before you begin:
- If it is running, stop the WildFly server.
- If you plan to test using a standalone server, back up the standalone server configuration file:
WILDFLY_HOME/standalone/configuration/standalone-full-ha.xml
- If you plan to test using a managed domain, back up the following files:
WILDFLY_HOME/domain/configuration/domain.xml
WILDFLY_HOME/domain/configuration/host.xml
After you have completed testing this quickstart, you can replace these files to restore the server to its original configuration.
You configure the server by running the install-domain.cli script provided in the root directory of this quickstart.
- Open a terminal and navigate to the root of the WildFly directory.
- Start the WildFly managed domain by typing the following command.
$ WILDFLY_HOME/bin/domain.sh
Note: For Windows, use the WILDFLY_HOME\bin\domain.bat script.
- Review the install-domain.cli file in the root of this quickstart directory. This script creates the server group and servers and configures messaging clustering for testing this quickstart. You will note it does the following:
  - Stops the servers
  - Creates a server-group to test ActiveMQ clustering
  - Adds 2 servers to the server-group
  - Configures ActiveMQ clustering in the full-ha profile
  - Deploys the helloworld-mdb.war archive
  - Restarts the servers
- Open a terminal, navigate to the root directory of this quickstart, and run the following command to run the script:
$ WILDFLY_HOME/bin/jboss-cli.sh --connect --file=install-domain.cli
Note: For Windows, use the WILDFLY_HOME\bin\jboss-cli.bat script.
You should see "outcome" ⇒ "success" for all of the commands.
- Restart the server in a managed domain as described above.
If you choose to run standalone servers instead of a managed domain, you will need two instances of the application server.
Application server 2 must be started with a port offset parameter provided to the startup script as -Djboss.socket.binding.port-offset=100.
Since both application servers must be configured in the same way, you must configure the first server and then clone it.
- Open a terminal and navigate to the root of the WildFly directory.
- Start the WildFly server with the standalone full HA profile, which supports messaging and high availability, by typing the following command.
$ WILDFLY_HOME/bin/standalone.sh -c standalone-full-ha.xml
Note: For Windows, use the WILDFLY_HOME\bin\standalone.bat script.
- Review the install-standalone.cli file in the root of this quickstart directory. This script configures clustering for a standalone server. You will note it does the following:
  - Enables console logging. By default, the full HA profile does not log to the console, so this script enables it.
  - Enables clustering and sets a cluster password
  - Enables clustering in the RemoteConnectionFactory
  - Deploys the helloworld-mdb.war archive
  - Reloads the server configuration
- Open a terminal, navigate to the root directory of this quickstart, and run the following command to run the script:
$ WILDFLY_HOME_1/bin/jboss-cli.sh --connect --file=install-standalone.cli
Note: For Windows, use the WILDFLY_HOME_1\bin\jboss-cli.bat script.
You should see "outcome" ⇒ "success" for all of the commands.
After you have successfully configured the server, you must make a copy of this WildFly directory structure to use for the second server.
- Stop the server.
- Make a copy of this WildFly directory structure to use for the second server.
- Remove the following directories from the cloned instance:
WILDFLY_HOME_2/standalone/data/activemq/bindings/
WILDFLY_HOME_2/standalone/data/activemq/journal/
WILDFLY_HOME_2/standalone/data/activemq/largemessages/
When you start the servers, you must pass the cluster password on the command line to avoid the warning AMQ222186: unable to authorise cluster control.
$ WILDFLY_HOME_1/bin/standalone.sh -c standalone-full-ha.xml
$ WILDFLY_HOME_2/bin/standalone.sh -c standalone-full-ha.xml -Djboss.socket.binding.port-offset=100
Note: For Windows, use the WILDFLY_HOME_1\bin\standalone.bat and WILDFLY_HOME_2\bin\standalone.bat scripts.
In the managed domain, the application will be running at the following URL: http://localhost:9080/helloworld-mdb/HelloWorldMDBServletClient. It will send some messages to the queue.
To send messages to the topic, use the following URL: http://localhost:9080/helloworld-mdb/HelloWorldMDBServletClient?topic
On a standalone server, the application will be running at the following URL: http://localhost:8080/helloworld-mdb/HelloWorldMDBServletClient. It will send some messages to the queue.
To send messages to the topic, use the following URL: http://localhost:8080/helloworld-mdb/HelloWorldMDBServletClient?topic
Look at the WildFly server console or log and you should see log messages like the following:
[Server:quickstart-messagingcluster-node1] INFO [class org.jboss.as.quickstarts.mdb.HelloWorldQueueMDB] (Thread-8 (ActiveMQ-client-global-threads-1067469862)) Received Message from queue: This is message 1
[Server:quickstart-messagingcluster-node1] INFO [class org.jboss.as.quickstarts.mdb.HelloWorldQueueMDB] (Thread-8 (ActiveMQ-client-global-threads-1067469862)) Received Message from queue: This is message 3
[Server:quickstart-messagingcluster-node1] INFO [class org.jboss.as.quickstarts.mdb.HelloWorldQueueMDB] (Thread-6 (ActiveMQ-client-global-threads-1067469862)) Received Message from queue: This is message 5
[Server:quickstart-messagingcluster-node2] INFO [class org.jboss.as.quickstarts.mdb.HelloWorldQueueMDB] (Thread-8 (ActiveMQ-client-global-threads-1771031398)) Received Message from queue: This is message 2
[Server:quickstart-messagingcluster-node2] INFO [class org.jboss.as.quickstarts.mdb.HelloWorldQueueMDB] (Thread-7 (ActiveMQ-client-global-threads-1771031398)) Received Message from queue: This is message 4
Note that the logging indicates messages have arrived from both node 1 (quickstart-messagingcluster-node1) and node 2 (quickstart-messagingcluster-node2).
You will see the following warnings in the server logs. You can ignore these warnings as they are intended for production servers.
WARNING [org.jgroups.protocols.UDP] (Thread-0 (ActiveMQ-server-ActiveMQServerImpl::serverUUID=c79278db-56e6-11e5-af50-69dd76236ee8-1573164340)) JGRP000015: the send buffer of socket DatagramSocket was set to 1MB, but the OS only allocated 212.99KB. This might lead to performance problems. Please set your max send buffer in the OS correctly (e.g. net.core.wmem_max on Linux)
WARNING [org.jgroups.protocols.UDP] (Thread-0 (ActiveMQ-server-ActiveMQServerImpl::serverUUID=c79278db-56e6-11e5-af50-69dd76236ee8-1573164340)) JGRP000015: the receive buffer of socket DatagramSocket was set to 20MB, but the OS only allocated 212.99KB. This might lead to performance problems. Please set your max receive buffer in the OS correctly (e.g. net.core.rmem_max on Linux)
WARNING [org.jgroups.protocols.UDP] (Thread-0 (ActiveMQ-server-ActiveMQServerImpl::serverUUID=c79278db-56e6-11e5-af50-69dd76236ee8-1573164340)) JGRP000015: the send buffer of socket MulticastSocket was set to 1MB, but the OS only allocated 212.99KB. This might lead to performance problems. Please set your max send buffer in the OS correctly (e.g. net.core.wmem_max on Linux)
WARNING [org.jgroups.protocols.UDP] (Thread-0 (ActiveMQ-server-ActiveMQServerImpl::serverUUID=c79278db-56e6-11e5-af50-69dd76236ee8-1573164340)) JGRP000015: the receive buffer of socket MulticastSocket was set to 25MB, but the OS only allocated 212.99KB. This might lead to performance problems. Please set your max receive buffer in the OS correctly (e.g. net.core.rmem_max on Linux)
After the server has been running for a period of time, you might see the following warnings in the server log, followed by a stack trace. You can ignore these warnings as this is a known issue and is harmless. See JBEAP-794 for more information.
WARN [org.infinispan.topology.ClusterTopologyManagerImpl] (transport-thread--p15-t6) ISPN000197: Error updating cluster member list: org.infinispan.util.concurrent.TimeoutException: Replication timeout for <application-name>
When you are finished testing, use the following instructions to undeploy the quickstart.
- Make sure you have started the WildFly server in a managed domain as described above.
- Open a terminal, navigate to the root directory of this quickstart, and run the following command to undeploy the helloworld-mdb quickstart:
$ WILDFLY_HOME/bin/jboss-cli.sh --connect --file=undeploy-domain.cli
Note: For Windows, use the WILDFLY_HOME\bin\jboss-cli.bat script.
- Make sure you start the WildFly server as described above.
- Open a terminal, navigate to the root directory of this quickstart, and run the following command to undeploy the helloworld-mdb quickstart:
$ WILDFLY_HOME/bin/jboss-cli.sh --connect --file=undeploy-standalone.cli
Note: For Windows, use the WILDFLY_HOME\bin\jboss-cli.bat script.
You can remove the domain configuration by manually restoring the back-up copies of the configuration files or by running the JBoss CLI script.
Important: This method ensures the server is restored to its prior configuration.
- If it is running, stop the WildFly server.
- Restore the WILDFLY_HOME/domain/configuration/domain.xml and WILDFLY_HOME/domain/configuration/host.xml files with the back-up copies of the files. Make sure you replace WILDFLY_HOME with the path to your server.
Important: This script returns the server to the default configuration, which might not match the server configuration that existed prior to testing this quickstart. If you were not running with the default configuration before testing this quickstart, you should follow the instructions above to manually restore the domain configuration to its previous state.
- Start the WildFly managed domain by typing the following:
$ WILDFLY_HOME/bin/domain.sh
Note: For Windows, use the WILDFLY_HOME\bin\domain.bat script.
- Open a new terminal, navigate to the root directory of this quickstart, and run the following command, replacing WILDFLY_HOME with the path to your server.
$ WILDFLY_HOME/bin/jboss-cli.sh --connect --file=remove-domain.cli
Note: For Windows, use the WILDFLY_HOME\bin\jboss-cli.bat script.
This script removes the server configuration that was done by the install-domain.cli script. You should see the following result when the script commands complete:
The batch executed successfully
Important: If the command fails, simply wait a few seconds and run the command a second time.
You can remove the standalone configuration by manually restoring the back-up copy of the configuration file or by running the JBoss CLI script.
Important: This method ensures the server is restored to its prior configuration.
- If they are running, stop both WildFly servers.
- Restore the WILDFLY_HOME_1/standalone/configuration/standalone-full-ha.xml file with the back-up copy of the file. Make sure you replace WILDFLY_HOME_1 with the path to your server.
Important: This script returns the server to the default configuration, which might not match the server configuration that existed prior to testing this quickstart. If you were not running with the default configuration before testing this quickstart, you should follow the instructions above to manually restore the standalone configuration to its previous state.
- Start the WildFly server by typing the following:
$ WILDFLY_HOME_1/bin/standalone.sh -c standalone-full-ha.xml
Note: For Windows, use the WILDFLY_HOME_1\bin\standalone.bat script.
- Open a new terminal, navigate to the root directory of this quickstart, and run the following command, replacing WILDFLY_HOME_1 with the path to your server.
$ WILDFLY_HOME_1/bin/jboss-cli.sh --connect --file=remove-standalone.cli
Note: For Windows, use the WILDFLY_HOME_1\bin\jboss-cli.bat script.
This script removes the server configuration that was done by the install-standalone.cli script. You should see the following result when the script commands complete:
The batch executed successfully