This is another tale of one step back, two steps sideways. I was trying to understand why the JMX data on the MDBs was not as I expected, and why I was not seeing tasks waiting. I am still working on that little problem, but in passing I found I could not get my MDBs to scale. I have rewritten parts of this post multiple times as my understanding improved. I believe the concepts are correct, but the implementation may differ from what I have described.
There are three parts to an MDB.
- A thread gets a message from the queue.
- The message is passed to the onMessage() method of the application.
- The application uses a connection factory to get a connection to the queue manager and to send the reply.
Expanding this to provide more detail:
Thread pools are used to reuse the MQ connection, as the MQCONN and MQDISC are expensive operations. By using a thread pool, the repeated MQCONN and MQDISC can be avoided.
There is a specific pool for the application, and when threads are released from this pool, they are put into a general pool. Periodically threads can be removed from the general pool, by issuing MQDISC, and then deleting the thread.
Get the message from the queue
The thread has two modes of operation: async consume, or plain old-fashioned MQGET.
If the channel has SHARECNV(0) there is a listener thread which browses the queue and waits a short period (for example 5 seconds) for a message. The wait is short so that the thread can take action if required (for example, stop running). This means that if there is no traffic there is an empty MQGET every 5 seconds, which can be expensive.
If the channel has SHARECNV(>0) then async consume is used. Internally there is a thread which browses the queue, and multiple threads which can get the message.
The maximum number of threads which can get messages is defined by the maxPoolDepth activation-config-property in the ejb-jar.xml file.
These threads are in a pool called EJBPoolRuntime. Each MDB has a thread pool of this name, but in the JMX data you can identify the pool because the descriptor looks like MessageDrivenEJBRuntime=WMQ_IVT_MDB, Name=WMQ_IVT_MDB, ApplicationRuntime=MDB3, Type=EJBPoolRuntime, EJBComponentRuntime=MDB3/… where my MDB was called MDB3.
The parameters are defined in the ejb-jar.xml file. The definitions are documented here. The example below shows how to get messages from a queue called JMSQ2, with no more than 37 threads able to get a message.
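A sketch of what such an ejb-jar.xml fragment might look like. The property names destination and maxPoolDepth, the queue JMSQ2, and the value 37 come from this post; the surrounding descriptor elements are standard ejb-jar.xml structure, and other required elements are omitted for brevity:

```xml
<enterprise-beans>
  <message-driven>
    <ejb-name>WMQ_IVT_MDB</ejb-name>
    <!-- ejb-class, messaging-type, transaction-type etc. omitted -->
    <activation-config>
      <activation-config-property>
        <activation-config-property-name>destination</activation-config-property-name>
        <activation-config-property-value>JMSQ2</activation-config-property-value>
      </activation-config-property>
      <activation-config-property>
        <activation-config-property-name>maxPoolDepth</activation-config-property-name>
        <activation-config-property-value>37</activation-config-property-value>
      </activation-config-property>
    </activation-config>
  </message-driven>
</enterprise-beans>
```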
Note: I did get messages like the following, which I ignored (I think they are produced in error):
- <Warning> <EJB> <BEA-015073> <Message-Driven Bean WMQ_IVT_MDB(Application: MDB3, EJBComponent: MDB3.jar) is configured with unknown activation-config-property name maxPoolDepth>
- <Warning> <EJB> <BEA-015073> <Message-Driven Bean WMQ_IVT_MDB(Application: MDB3, EJBComponent: MDB3.jar) is configured with unknown activation-config-property name destination>
The default value of maxPoolDepth is 10 – this explains why I only had 10 threads getting messages from the queue.
Passing the message to the application for processing.
Once a message is available, it is passed to the onMessage() method of the application. There is some weblogic-specific code which seems to add little value. The concepts are:
- There is an array of handles/beans of size max-beans-in-free-pool.
- When the first message is processed, create “initial-beans-in-free-pool” beans and populate the array; this invokes the ejbCreate() method of the application.
- When a message arrives, find a free element in this array.
- If the slot has a bean, then use it,
- else allocate a bean and store it in the slot. This allocation invokes the ejbCreate() method of the application. On my laptop it took about a second to allocate a new bean, which means there is a lag when responding to a spike in workload.
- Call the onMessage() method of the application.
- If all of the slots are in use, then wait.
- On return from onMessage(), flag the entry as free.
- Every idle-timeout-seconds, scan the array and free beans to shrink the current size back to initial-beans-in-free-pool. As part of this, the ejbRemove() method of the application is invoked.
The definitions are documented here.
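The pool behaviour described above can be sketched as follows. This is a hypothetical simplification, not WebLogic's implementation: a String stands in for a bean instance, creating one represents the slow EJBCreate() call, and discarding one represents EJBRemove().

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical sketch of the bean free pool described above.
public class BeanFreePool {
    private final int maxBeans;            // max-beans-in-free-pool
    private int created = 0;               // beans allocated so far (free + in use)
    private final Deque<String> freeBeans = new ArrayDeque<>();

    public BeanFreePool(int maxBeans) { this.maxBeans = maxBeans; }

    // Called when a message arrives: reuse a free bean, create one if the
    // array is not full, otherwise wait for a bean to be freed.
    public synchronized String acquire() throws InterruptedException {
        while (freeBeans.isEmpty() && created == maxBeans) {
            wait();                        // all slots in use: the caller waits
        }
        if (!freeBeans.isEmpty()) {
            return freeBeans.pop();        // slot already has a bean: no creation cost
        }
        created++;                         // simulate EJBCreate(): about a second
        return "bean-" + created;
    }

    // Called on return from onMessage(): flag the entry as free.
    public synchronized void release(String bean) {
        freeBeans.push(bean);
        notifyAll();
    }

    // The idle-timeout-seconds scan: shrink back towards the initial size,
    // discarding free beans (where EJBRemove() would be invoked).
    public synchronized void shrinkTo(int initialSize) {
        while (created > initialSize && !freeBeans.isEmpty()) {
            freeBeans.pop();
            created--;
        }
    }

    public static void main(String[] args) throws InterruptedException {
        BeanFreePool pool = new BeanFreePool(10);
        String a = pool.acquire();         // creates bean-1
        pool.release(a);
        String b = pool.acquire();         // reuses bean-1, no creation cost
        System.out.println(a.equals(b));   // prints true
    }
}
```

The key point the sketch illustrates is the lag: a spike in workload finds the free list empty, and each extra message pays the bean-creation cost before onMessage() can run.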
I could find no benefit in using this pool.
The default max-beans-in-free-pool is 1000, which feels large enough. You should make initial-beans-in-free-pool the same as, or larger than, the number of threads getting messages; see maxPoolDepth above.
If initial-beans-in-free-pool is too small, then periodically the pool will be purged down to that size, and beans will then be allocated again as needed. You will get a periodic drop in throughput.
Note that the term max-beans-in-free-pool is not entirely accurate: the maximum number of beans for the pool is the current beans in the free pool plus the active beans. The term is accurate only when no beans are in use.
In the JMX statistics data, there is information on this pool. The data name is like com.bea:ServerRuntime=AdminServer2, MessageDrivenEJBRuntime=WMQ_IVT_MDB, Name=WMQ_IVT_MDB, ApplicationRuntime=MDB3, Type=EJBPoolRuntime, where WMQ_IVT_MDB is the display name of the MDB, and MDB3 is the name of the jar file. This allows you to identify the pool for each MDB.
Get a connection and send the reply – the application connectionFactory pool.
The application typically needs to issue an MQCONN, an MQOPEN of the reply-to queue, put the message, and issue MQDISC before returning. The MQCONN and MQDISC are expensive, so a pool is used to save the queue manager connection handle between calls. The connections are saved in a thread pool.
In the MDB java application there is code like ConnectionFactory cf = (ConnectionFactory)ctx.lookup("CF3");
Where the connectionFactory CF3 is defined in the resource Adapter configuration.
The connectionFactory cf can then be used when putting messages.
The logic is like:
- If there is a free thread in the connectionFactory pool, then use it,
- else (there is no free thread in the connectionFactory pool):
- if the number of threads in the connectionFactory pool is at the maximum value, then throw an exception,
- else create a new thread (doing an MQCONN etc.).
- When the java program issues connection.close(), return the thread to the connectionFactory pool.
It looks like the queue handle is not cached, so there is an MQOPEN… MQCLOSE of the reply queue for every request.
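That pool logic can be sketched as below. This is a hypothetical simplification (the real pool lives inside the resource adapter): a String stands in for an MQ connection, creating one represents the expensive MQCONN, and discarding one represents MQDISC.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical sketch of the connectionFactory pool logic described above.
public class ConnectionPool {
    private final int maxCapacity;              // Max Capacity of the pool
    private int total = 0;                      // connections created (free + in use)
    private final Deque<String> free = new ArrayDeque<>();

    public ConnectionPool(int maxCapacity) { this.maxCapacity = maxCapacity; }

    // Like cf.createConnection(): reuse a free connection, create one if under
    // the maximum, otherwise fail (compare the MQJCA1011 failure in the text).
    public synchronized String getConnection() {
        if (!free.isEmpty()) {
            return free.pop();                  // reuse: no MQCONN needed
        }
        if (total >= maxCapacity) {
            throw new IllegalStateException("pool exhausted: failed to allocate a connection");
        }
        total++;                                // simulate MQCONN (expensive)
        return "conn-" + total;
    }

    // Like connection.close(): the connection goes back to the pool, no MQDISC.
    public synchronized void close(String conn) {
        free.push(conn);
    }

    // Shrink processing: discard free connections (MQDISC) down to initialSize.
    public synchronized void shrinkTo(int initialSize) {
        while (total > initialSize && !free.isEmpty()) {
            free.pop();                         // simulate MQDISC
            total--;
        }
    }
}
```

Note the contrast with the bean pool: when this pool is at its maximum the caller gets an exception rather than waiting.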
You configure the connectionFactory resource pool from: Home, Deployments, click on your resource adapter, click on the Configuration tab, click on the + in front of the javax.jms, ConnectionFactory, click on the connectionFactory name, click on the Connection Pool tab, specify the parameters and click on the save button.
Note: You have to stop and restart the server or redeploy the application to pick up changes!
This pool size needs to have enough capacity to handle the case when all input threads are busy with an MQGET.
JMX provides statistics with a description like com.bea:ServerRuntime=AdminServer2, Name=CF3, ApplicationRuntime=colinra, Type=ConnectorConnectionPoolRuntime, ConnectorComponentRuntime=colinra_colin where CF3 is the name of the connection pool defined to the resource adapter, colinra is the name I gave to the resource adapter when I installed it, and colin.rar is the name of the resource adapter file.
The application connectionFactory pool can be used by different MDBs. You need to make sure this pool has enough capacity for all the MDBs using it.
If the pool is used by MDBs running with different userids, then when a thread is obtained and that thread was last used for a different userid, the thread has to issue MQDISC and MQCONN with the current userid. This defeats the purpose of having a connection pool.
To prevent this you should have a separate connection pool for MDBs running with the same userid.
Getting a thread from the general pool may have the same problem, so you should ensure your pools have a maximum limit which is suitable for expected peak processing, and an initial size suitable for your normal processing. This should reduce the need to switch userids.
Cleaning up the connectionFactory
When the connectionFactory is configured, you can specify
- Shrink Frequency Seconds:
- Shrink Enabled: true|false
These parameters effectively say: after each “Shrink Frequency Seconds” interval, if the number of threads in the connectionFactory pool is larger than the initial pool size, then end threads (doing an MQDISC) to reduce the number of threads to the initial pool size. If the initial pool size is badly chosen you may get, say, 20 threads ending, so there are 20 MQDISCs, and because of the load 20 threads are immediately created to handle the workload. During this period there will be insufficient threads to handle the workload, so you will get a blip in the throughput.
If you have one connectionFactory pool being used by a high-importance MDB and by a low-importance MDB, it could be that the high-importance MDB is impacted by this “release/acquire” while the low-importance MDB is not affected. Consider isolating the connectionFactory pools and specifying the appropriate initial pool size for each.
To find out what was going on I used:
- DIS QSTATUS(inputqueue) to see the number of open handles. This is the listener count (1) plus the current number of threads doing MQGETs, so with maxPoolDepth = 19 this value was up to 20.
- I changed my MDB application to display the instance number when it was deployed.
This gave me a message which told me when the instance was created, so I could see when it was started. I could then see more instances created as the workload increased.
- By using a client connection, I could specify the appltag for the connection pool and so see the number of MQCONNs from the application connectionFactory.
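For example, the queue status can be displayed from runmqsc. The queue manager name QM1 and queue name JMSQ2 here are placeholders; IPPROCS is the count of handles open for input, and CURDEPTH the current queue depth:

```
echo "DIS QSTATUS(JMSQ2) IPPROCS CURDEPTH" | runmqsc QM1
```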
What happens if I get the numbers wrong?
- If messages on the input queue are slow to be processed, or the queue depth is often too high, you may have one of the following problems.
- If ejb-jar.xml maxPoolDepth is too small, this will limit the number of messages you can process concurrently.
- The weblogic max-beans-in-free-pool is too small. If all the beans in the pool (array) are busy, consider making the pool bigger. Requests queue in the listeners waiting for a free MDB instance. The JMX data has fields with names like “Wait count”, but in my tests these were always zero, so I think these fields are of no value.
- The number of connections in the connectionFactory pool is too small. If the number of requests exceeds the pool size, the MDB instance gets an exception: MQJCA1011: Failed to allocate a JMS connection. You need to change the Max Capacity for the connectionFactory pool in the resource adapter definition.
- If you find you have many MQDISC and MQCONNs happening at the same instance, consider increasing the initial size of the connectionFactory pool.
- Make the initial values suitable for your average workload. This will prevent the periodic destroy and recreate of the connections and beans.
You may want to have more than one weblogic server for availability and scalability.
You could also deploy the same application with a different MDB name, so if you want to stop and restart an MDB, you have another MDB processing messages.