JMS performance and tuning in WAS Liberty

Why these JMS blog posts?

I had a “quick” question from someone: “can I configure JMS to reduce CPU usage and improve performance?”. It was a wet Monday in Orkney (north of Scotland) and I thought I would spend an hour looking into it. A few weeks later, I am much wiser, and have some answers to the question. The Knowledge Center has a lot of information: most of it is useful, most of it is accurate, some information is missing, and some assumes that you are very familiar with the product.

I also found that the Java people tend to use different words for familiar concepts, so I had to struggle with this different language.

Below are the blog posts I wrote on getting JMS working on Ubuntu 18.04 with MQ V9.


Paul Titheridge of the IBM MQ change team gave me a huge amount of help with this document – many of the words are his – any errors are all mine.

Tuning summary

  1. Use a connection pool for each business application so they do not interact.
  2. If you are using client connections, specify applicationName so you can see which connection pool is being used for the connections and queue handles.
  3. Ensure each jmsActivationSpec has a connection pool with the right size, labelled with the applicationName.
  4. Use the dis conn, and dis qstatus type(handle) commands to show the appltag (= applicationName), userid, and pid (process id) to identify connections coming from the web server.
  5. Use the MQ statistics to see the number of connects and disconnects, the appltag and userid.

I’ll cover some basics before going into detail.  At the bottom I cover the (partial) success I had in tuning the configuration.

10 second performance background

  1. The MQCONN and MQDISC requests are expensive operations. You should use them as infrequently as possible.
  2. If you want to change the userid that your thread is using to communicate with MQ, you have to do MQDISC and then MQCONN with the new userid and password.

If you can keep a queue open, it uses less CPU than frequent opens and closes.

30 second background on JMS in a web server

There are three common patterns for JMS applications:

  1. Connect once and stay connected all day doing sends and receives.
  2. A listener task gets a message from the queue and passes it to a Message Driven Bean (MDB) which does all of the application work. It usually connects to MQ and sends a reply message.
  3. An application uses the web server and runs a transaction which invokes a program to do the work.

Scenarios in more detail

Connect once

This does one MQCONN/MQDISC

Message Driven Bean(MDB)

  • The listener thread is a long running thread which connects to the queue manager, and loops getting a message from the queue and passing it to an MDB. You specify the MDB name, but can have many instances of the MDB running.
  • The MDB does not need a connection to be given the message.
  • The MDB is given the message, and typically puts a reply back to the originator, so the MDB needs a connection to do this. If it consumes the message, and does no other MQ work, it does not need an MQ connection.
  • These MDB instances all run with the same userid (and password). Logically each instance needs to do an MQCONN … MQDISC. As this is expensive, there is a capability called connection pooling (depending on your JMS provider).
  • With connection pooling, when an application issues MQDISC, the request is not passed to MQ; instead the connection is saved. Next time an application does an MQCONN, it can reuse this connection and so save a lot of CPU.
  • In a similar way MQCLOSE may not always pass the request through to MQ, but keep the queue open for the next MQOPEN request. I do not think Liberty does this.
  • With connection pooling you can usually specify the maximum number of connections that can be in the pool.
  • If you look at the accounting data, you will see an application did one open of the reply-to queue, and many puts. This shows connection pooling was used, and the queue was held open.
  • If the number of connections in use is at the maximum limit for the connection pool, then any new request will be queued.
  • The jmsActivationSpec has a parameter maxEndpoints, which defaults to 500. This is the maximum number of MDB instances that can run concurrently. The connectionFactory used by the MDBs needs to have at least maxEndpoints connections in the pool.

A program running in the web server.

Typically a request is entered into the URL of a web browser for example http://localhost:9080/WMQ_IVT/

The string WMQ_IVT maps to a program which runs and processes the request. For example receive a message from a queue, and send a reply back.

You often have to sign on to be able to use the transaction, but a userid and password may be specified for the connection.

Connection pooling can be used, but it is more complex, as the connection pooling code will search for an existing connection in the pool with the same userid and password (the same “subject”), and will use the connection if found. If one is not found, then it takes an unused connection from the pool (with the previous user’s userid and password), issues MQDISC to release the connection with the “wrong” userid, then issues MQCONN with the new userid and password.

If the number of connections in use is at the maximum limit for the connection pool, then any new request will be queued.

Configuring the Web Server

  • When you configure your web server you define connection factory (CF) information. This has parameters such as maximum pool size.
  • An application specifies which connection factory to use. Note: Some applications define all the parameters in the program, and do not use a connection factory.
  • You can specify multiple connection factories.

What can possibly go wrong?

If the PAYROLL application is using the same connection factory as the INQUIRY application, and the INQUIRY application’s instances are using all of the connections in the pool, then a PAYROLL instance will have to wait until there is a free connection. This is not good.

You should isolate the CF for different business applications to provide application isolation.

If the connection factory has a maximum size of 2 and the applications use different userids (and passwords) you can get:

  • application INQUIRY with userid=INQUSER running twice. These end, and the connections are put back into the pool.
  • PAYROLL runs, with userid=PAYRUSER. There are no free connections with userid PAYRUSER. Under the covers the code has to obtain a free connection, and issue MQDISC + MQCONN. This program ends.
  • If PAYROLL runs again, it can reuse the connection in the pool.
  • If INQUIRY runs next, it finds there is a connection with userid INQUSER, so it does not need to do MQDISC and MQCONN.

So depending on the size of the pool there may be MQCONN and MQDISC to process different userids. Over time you may get lots of MQCONN and MQDISC when there is no connection with the required subject.

So again the PAYROLL transaction is impacted by the INQUIRY transaction. In this example, making the connection pool larger (for example 4) would improve the performance. It may be hard to decide how big to make the pool because of the unknown number of userids that are being used. Making the maximum connection pool size very large can impact other applications, if these other applications are unable to connect to MQ because of the MQ connection limit.

It will be easier to manage if each application has its own connection factory.
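The PAYROLL and INQUIRY behaviour above can be sketched in plain Java. This is only an illustration of the pooling logic (all the names are invented, and it ignores waiting and timeouts), not the real Liberty code:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.Iterator;

// Sketch of the pooling behaviour described above: connections are keyed
// by the userid ("subject") they were built with; a subject mismatch costs
// an MQDISC + MQCONN pair. All names here are invented.
public class SubjectPool {
    static class PooledConnection {
        final String userid;
        PooledConnection(String userid) { this.userid = userid; }
    }

    private final int maxSize;
    private final Deque<PooledConnection> free = new ArrayDeque<>();
    private int inUse = 0;
    // counters so we can see how many real MQCONN/MQDISC calls were needed
    int mqconnCount = 0;
    int mqdiscCount = 0;

    SubjectPool(int maxSize) { this.maxSize = maxSize; }

    /** Borrow a connection for the given userid; null means "pool full, wait". */
    synchronized PooledConnection get(String userid) {
        // 1. prefer a free connection with a matching subject
        for (Iterator<PooledConnection> it = free.iterator(); it.hasNext(); ) {
            PooledConnection c = it.next();
            if (c.userid.equals(userid)) { it.remove(); inUse++; return c; }
        }
        // 2. otherwise recycle any free connection: MQDISC the "wrong"
        //    userid, then MQCONN with the new one
        if (!free.isEmpty()) {
            free.pop();
            mqdiscCount++; mqconnCount++; inUse++;
            return new PooledConnection(userid);
        }
        // 3. no free connection: MQCONN a new one if under the limit
        if (inUse < maxSize) {
            mqconnCount++; inUse++;
            return new PooledConnection(userid);
        }
        return null; // at the maximum - the request would be queued
    }

    /** The application's MQDISC just puts the connection back in the pool. */
    synchronized void release(PooledConnection c) { inUse--; free.push(c); }
}
```

Replaying the scenario with a pool of size 2 shows one MQDISC + MQCONN pair the first time PAYROLL runs, and none when it runs again.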

How many MDBs can be running concurrently?

You can configure the jmsActivationSpec to specify the maximum number of MDBs running at the same time. If each of these needs to put a reply, then the connection pool for these MDBs needs to have a connection for each, so the connection factory being used needs to have the same capacity as, or larger than, that on the jmsActivationSpec.

So check that the connection pool used by the MDBs has enough capacity for the size of the jmsActivationSpec connection pool.
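For example, the activation spec and the connection factory it uses for replies might be configured with matching sizes. This is a sketch only: the ids, JNDI names and application names below are all invented.

```xml
<jmsActivationSpec id="myApp/myModule/MyMDB" maxEndpoints="50">
    <properties.wmqJms destinationRef="jndi/requestQueue"
                       applicationName="MYMDBAS"/>
</jmsActivationSpec>

<!-- the connection factory the MDBs use for replies: the pool must hold
     at least as many connections as maxEndpoints above -->
<jmsConnectionFactory id="REPLYCF" jndiName="jms/REPLYCF">
    <connectionManager maxPoolSize="50"/>
    <properties.wmqJms applicationName="MYMDBCF"/>
</jmsConnectionFactory>
```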

MQ can limit the number of connections it has.

See MaxChannels in the “Attributes of channels stanzas” documentation; the default is 200!

qm.ini -> CHANNELS -> MaxChannels

 Maximum instances of server-connection channel connections (MAXINST) the default is 999999999.

Maximum instances per client (MAXINSTC) the default is 999999999.

How can I tell what is going on ?

You can get information about the queue manager from the queue manager, and information about the web server from both the web server and the queue manager.

Getting information from the queue manager.
On distributed MQ you can use MQ statistics to display the number of MQCONNs and MQDISCs in a time interval. If the number of these is low, you may not have a problem or you may have little activity.

Look at the MQ accounting data, this contains information about each transaction. Interesting fields are

“applName”: “java” – the applName will be “java” if using bindings mode, or the applicationName if using a client connection and applicationName has been specified.

“processId”: 18560 – in bindings mode, this is the process id of the web server instance. When using client connections, this is the process id of the channel processing program, /opt/mqm/bin/amqrmppa. There can be more than one of these, depending on the number of connections.

“userIdentifier”: “colinpaice” – the userid running the work.

“startDateTime”: “2018-10-26T10:36:08” – when the MQCONN happened.

“endDateTime”: “2018-10-26T10:36:08” – when the MQDISC happened.

If you see that the duration (endDateTime minus startDateTime) is short, under 10 seconds, most probably you do not have connection pooling.

If you are using connection pooling on WAS liberty, you can specify the maxIdleTime so you would typically expect the duration of each record to be longer than this.

Note. If the configuration is changed, all threads may be closed down and restarted, in this case the duration may be short.
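The duration check described above can be sketched in a few lines of Java (the class and method names are mine; it assumes the timestamp format shown in the accounting data):

```java
import java.time.Duration;
import java.time.LocalDateTime;

// Sketch: flag an accounting record whose connection lifetime is
// suspiciously short, suggesting connection pooling is not in use.
public class ConnDuration {
    // timestamps as in the accounting data, e.g. "2018-10-26T10:36:08"
    static boolean looksUnpooled(String startDateTime, String endDateTime) {
        Duration d = Duration.between(
                LocalDateTime.parse(startDateTime),
                LocalDateTime.parse(endDateTime));
        return d.getSeconds() < 10; // under 10 seconds: probably no pooling
    }
}
```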

Getting started with information from the display commands.

To investigate the connection and handle usage I did the following.

  1. Specify applicationName in the properties.wmqJms definitions. This only works for client (not bindings) connections, so specify a client connection.
  2. Use a different applicationName for each connectionFactory and ActivationSpec


With this I could then use runmqsc to display some information. For example

DIS QSTATUS(IVT*) type(handle)

AMQ8450I: Display queue status details.

(four queue status entries were returned; the detail lines, showing the APPLTAG, USERID and PID of each handle, are not reproduced here)

This shows

  1. There is a definition using applicationName=”jmsASIVTCF”. There are three instances in use, all with the same userid, ibmsys1.
  2. There is a definition using applicationName=”JMSIVTCFA”. There is one instance, with a userid of colinpaice.
  3. These all have a process id of 8037.


dis conn(*) appltag where(pid,eq,8037)

AMQ8276I: Display Connection details.

(seven connections were returned; the detail lines are not reproduced here)

This shows there are pooled connections, as they are long lasting.

We can see what queues have been opened by the Liberty instance.

  dis conn(*) type(handle) where(pid,eq,8037)

AMQ8276I: Display Connection details.

(seven connections were returned; the detail lines, showing which queues are open, are not reproduced here)

So we can see that there are 3 connections with a queue open.

We can display the number of connections by the appltag ( applicationName)

  dis conn(*) where(appltag,eq,IVTCF) userid

AMQ8276I: Display Connection details.

(four connections were returned; the detail lines are not reproduced here)

So we can see these all have the same userid.

I used some python (which I will publish at a later date) to take the output from the command and summarize it into queue, userid, appltag, and count. For example

q=CP0000,user=colinpaice,appltag=oemput, 1
q=CP0001,user=colinpaice,appltag=COLINMDBCF, 38
q=CP0001,user=ibmsys1,appltag=oemput, 1

oemput is a batch program; fromQMAJMS and COLINMDBCF are from the applicationName of the Liberty connection factories.
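A minimal sketch of that kind of summarising, here in Java with invented names, assumes the runmqsc output presents attributes as NAME(value) pairs and counts the connections for each userid and appltag combination:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Sketch: parse "dis conn(*)" style output, pick out the USERID(...) and
// APPLTAG(...) attributes of each connection, and count how many
// connections share each userid/appltag pair.
public class ConnSummary {
    private static final Pattern ATTR = Pattern.compile("(\\w+)\\(([^)]*)\\)");

    static Map<String, Integer> summarize(String runmqscOutput) {
        Map<String, Integer> counts = new LinkedHashMap<>();
        String userid = null, appltag = null;
        for (String line : runmqscOutput.split("\n")) {
            // AMQ8276I marks the start of the next connection's details
            if (line.startsWith("AMQ8276I")) {
                userid = appltag = null;
                continue;
            }
            Matcher m = ATTR.matcher(line);
            while (m.find()) {
                if (m.group(1).equals("USERID"))  userid  = m.group(2);
                if (m.group(1).equals("APPLTAG")) appltag = m.group(2);
            }
            if (userid != null && appltag != null) {
                counts.merge("user=" + userid + ",appltag=" + appltag,
                             1, Integer::sum);
                userid = appltag = null;
            }
        }
        return counts;
    }
}
```

A real script would need more care, for example with attributes whose values contain parentheses, such as CONNAME.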

Getting information from the Web Server.

Use the tools available with the Web Server to display information about the Connection Pools.

Most J2EE application servers provide some performance metric indicators which monitor connection pool usage. These metrics are usually exposed by either JMX or PMI, and include things such as how long an application waits for a connection from the connection pool, the average time that a connection in the pool was used by an application, and so on. The metrics can be very useful in determining whether there is a lot of contention on a connection pool, and whether the size of the connection pool needs to be increased.

For example, with WAS Liberty, you can use jconsole to connect to the Liberty instance and display information about the connection pools.

This has a list of connection factories, and you can use operations→ showPoolContents. For example

  ManagedConnection@7ce98dcb=ActiveInTransaction thread=null transaction=2 connectionHandles=0


So we can see one connection is currently in use, and there are two available for reuse.

There are no transactions waiting.

How to tune MQ for JMS

The answer is that most of the tuning is outside of MQ, not within MQ.

Within an application, a JMS connectionFactory is used to define the connection. This includes information on how to connect to the queue manager, for example client or bindings.

As MQCONN and MQDISC are expensive requests, JMS can use connection pooling, where a connection is released back into a pool. A request for a connection can use a free connection in the pool – if one is available.

In a well set up, balanced MQ environment, there will be few MQCONNs and MQDISCs a second, as most of the requests should come from the pool. If a lot of work comes in, you may see the number of connections increase to the maximum. As the work drops off, the number of connections will drop down to a steady state.

To use connection pooling you configure the WebSphere® MQ resource adapter to specify

  • maxConnections
  • reconnectionRetryCount
  • reconnectionRetryInterval
  • startupRetryCount
  • startupRetryInterval

See here.

If you are using the SI bus in Liberty as your queuing infrastructure, you can prevent spikes in requests for connections when a lot of work arrives by specifying:

  • Surge threshold
  • Surge creation interval.

These limit how fast the pool grows.

These are not supported for MQ.

If the connection pool is not being used… what do I need to change to get it to be used

  • When running in WebSphere Application Server, connection pooling is provided for free providing an application is using a connection factory that is defined in the WebSphere Application Server JNDI repository.
  • I could not find much information about connection pooling in WebLogic Server.
  • JBOSS documentation refers to Connection pooling – see here  


What do the performance levers do?

With most machinery, there are usually levers to make it go faster, and levers to make it go slower.

Before publishing this blog, I thought I had better check that I had covered the basic Liberty tuning for MQ. I found some levers worked, some levers did not work, and there are  some bottlenecks or levers which are hidden. I used Google search to go through the available documentation and blog posts.

Basic scenario.

I used an MDB based on the code below. Note the Thread.sleep() to add a 2 second wait in the program.

The class

public class mdb implements javax.ejb.MessageDrivenBean,
MessageListener {

with the onMessage method to do the work

public void onMessage(Message message) {
  try {
    InitialContext ctx = new InitialContext();
    ConnectionFactory cf = (ConnectionFactory) ctx.lookup("COLINMDB");
    connection = cf.createConnection();
    // connection, session, producer and dest are instance fields of the MDB
    session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
    producer = session.createProducer(dest);
    TextMessage response = session.createTextMessage("Colins Reply");
    Thread.sleep(2000); // this many milliseconds
    producer.send(response);
  } catch (Exception je) {
  } finally {
    // closing the connection here returns it to the pool
  }
}

The program uses connectionFactory COLINMDB when sending the reply back.

There was additional code to report when an MDB started, and to print out statistics when it was shut down.   This was done using the methods

public void ejbCreate() {
  // print out information
}

public void ejbRemove() throws EJBException {
  // print out statistics
}

My program has code to time the various requests, and reports the data during “ejbRemove” processing.

Driving program.

I used a program to put 100 non-persistent messages, then wait for the replies, and then loop. So I would expect to have 100 instances of the MDB running at a time.

Server.xml file

In the server.xml file, I started by making the tuning parameters very large (typically over 200)

Initial results

When I ran the workload I had many messages on the input queue but achieved

  • number of MDBs running: 40
  • number of queue handles for the input queue appltag: 41
  • number of queue handles for the output queue appltag: 30

This was a disappointing surprise.

  1. I expected the number of MDBs to be close to 100. Even when running multiple jobs, and the queue depth of the input queue was over 200, I could not get more than 40-45 MDBs running. There is clearly a bottleneck or hidden lever here.
  2. The number of handles for the input queue was larger than the number of handles for the output queue. So there is another bottleneck or hidden lever here.

I would expect the number of output handles to peak at the value of the input handles. This is true when the number of MDBs is under about 20, but not for larger numbers.

What did the levers do?

jmsActivationSpec maxEndPoints

<jmsActivationSpec id="MDB/MDB/MDB" maxEndpoints="100">

Changing maxEndpoints from 100 to 25 gave

  • number of MDBs running: 25
  • number of queue handles for the input queue appltag: 25
  • number of queue handles for the output queue appltag: 25

The maxEndpoints lever clearly works.

jmsActivationSpec, properties.wmqJms maxPoolDepth

<jmsActivationSpec id="MDB/MDB/MDB"
<properties.wmqJms maxPoolDepth="800"

Setting maxPoolDepth to 7 gave

  • number of MDBs running: 7
  • number of queue handles for the input queue appltag: 7
  • number of queue handles for the output queue appltag: 7

So this clearly works.

I do not know the difference between maxEndpoints and maxPoolDepth. They both need to be set to a high value to work.

Output queue, jmsConnectionFactory, Connection Manager maxPoolSize

<jmsConnectionFactory id="COLINMDB" 
   <connectionManager maxPoolSize="200"…

Setting maxPoolSize to 13 gave

  • number of MDBs running: 100
  • number of queue handles for the input queue appltag: 100
  • number of queue handles for the output queue appltag: 13

This was a surprise: I got more MDBs but fewer output handles used.

Output queue, jmsConnectionFactory, properties.wmqJms  maxPoolDepth

 <jmsConnectionFactory id="COLINMDB" jndiName="COLINMDB">
  <connectionManager maxPoolSize="100"/>
  <properties.wmqJms maxPoolDepth="7"

No difference (but it did make a difference when used in the jmsActivationSpec)

wmqJmsClient maxConnections

<wmqJmsClient maxConnections="997" nativeLibraryPath="/opt/mqm/java/lib64"/>

Changing maxConnections="997" to maxConnections="25" had no effect.


<executor name="Default Executor" coreThreads="150" id="default" maxThreads="496" />

See here for the description of the executor tag.  Changing coreThreads and maxThreads had no effect

<executor name="LargeThreadPool" id="default"…

name="LargeThreadPool" is mentioned in several blog posts. Using this had no impact.

If I used both, I got messages such as "Property coreThreads has conflicting values" when the server was restarted. If I changed this dynamically, the problem was not detected.

Startup times.

When the server was started, it took 30 seconds from the message

[AUDIT ] CWWKZ0001I: Application MDB started in 3.024 seconds

to the first MDB being started and processing messages.

