How do I get a client to disconnect?

I had a question from a customer who asked how they can reduce the number of client connections in use.  They had tried setting a disconnect interval (DISCINT) on the channel, but the connections were like weeds – you kill them off, and they grow back again.

DISCINT is “the length of time after which a channel closes down, if no message arrives during that period”.  This sounds perfect for most people.   The application is in an MQGET, and if no messages arrive, the channel can be disconnected, and the application gets connection broken.   The application can then decide to disconnect or reconnect.
If the application is not in an MQGET, then it will get notified of the broken connection next time it tries to use MQ.
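
By way of illustration, here is a minimal sketch using the IBM MQ classes for Java (the queue manager and queue names are made up, and the client connection setup is omitted).  An application waiting in MQGET sees the DISCINT disconnection as reason code 2009 (MQRC_CONNECTION_BROKEN), and can decide whether to reconnect or to end:

import com.ibm.mq.MQException;
import com.ibm.mq.MQGetMessageOptions;
import com.ibm.mq.MQMessage;
import com.ibm.mq.MQQueue;
import com.ibm.mq.MQQueueManager;
import com.ibm.mq.constants.CMQC;

public class GetWithReconnect {
    public static void main(String[] args) throws MQException {
        while (true) {                                       // reconnect loop
            MQQueueManager qmgr = new MQQueueManager("QM1"); // made-up queue manager name
            MQQueue queue = qmgr.accessQueue("APP.REQUEST",  // made-up queue name
                    CMQC.MQOO_INPUT_SHARED | CMQC.MQOO_FAIL_IF_QUIESCING);
            MQGetMessageOptions gmo = new MQGetMessageOptions();
            gmo.options = CMQC.MQGMO_WAIT | CMQC.MQGMO_FAIL_IF_QUIESCING;
            gmo.waitInterval = 60 * 1000;                    // one-minute wait per MQGET
            while (true) {
                try {
                    MQMessage msg = new MQMessage();
                    queue.get(msg, gmo);                     // DISCINT can break this wait
                    // ... process the message ...
                } catch (MQException e) {
                    if (e.reasonCode == CMQC.MQRC_NO_MSG_AVAILABLE)
                        continue;                            // 2033: nothing arrived, wait again
                    if (e.reasonCode == CMQC.MQRC_CONNECTION_BROKEN)
                        break;                               // 2009: like the weeds, reconnect
                    throw e;                                 // anything else: give up
                }
            }
        }
    }
}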

Independent applications

Many applications are well written in that when they get Connection Broken, they just reconnect again, and so the DISCINT has no effect on reducing the number of connections. This may be good for availability but not for resource usage.   It may be good to have 1000 application instances running during the day, but perhaps not overnight when there is no work to do.   I’ve seen instances where the applications do an MQGET every minute; with 1000 instances this can use a lot of CPU while doing no useful work.  In this case you want unused application instances to stop, and be restarted when needed.

You cannot use triggering with client connections (unless you have a very smart trigger monitor to produce an event which says start a client program over there).

Use automation to periodically check the queue depth and the number of input handles. If there is a high queue depth, or a low number of handles (eg 2), then start more application instances across your back-end servers.  Your applications can then disconnect if they have not received a message within, say, 10 minutes.  This should keep the right number of application instances active.

An administrator should be able to get this automation set up, but getting the application to disconnect could be a challenge, as this requires the application developer to change the code!
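
For example, a minimal sketch of such an application change, this time in JMS (the JNDI names and the 10-minute value are made up): wait up to 10 minutes for a message and, if nothing arrives, disconnect and end, leaving the restart to the automation above:

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.naming.InitialContext;

public class IdleTimeoutConsumer {
    public static void main(String[] args) throws Exception {
        InitialContext ctx = new InitialContext();
        ConnectionFactory cf = (ConnectionFactory) ctx.lookup("CF1"); // made-up JNDI names
        Queue queue = (Queue) ctx.lookup("APP.REQUEST");
        Connection con = cf.createConnection();
        con.start();
        Session session = con.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageConsumer consumer = session.createConsumer(queue);
        Message msg;
        while ((msg = consumer.receive(10 * 60 * 1000)) != null) {  // 10-minute idle limit
            // ... process the message ...
        }
        con.close();  // nothing arrived for 10 minutes: MQDISC and end
    }
}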

Running under a web server

If your applications are running under a web server you may have mis-configured connection pools.  You can specify the initial size of the pool, and this many connections are made.  As more connections are needed, more can be added to the pool until the pool maximum is reached. You should specify a timeout value, so that periodically the pool gets cleaned up and unused connections are removed, until the pool is back to its initial size.  You should review the initial size of the pools (is it too large?) and the value of the timeout.

This should just be an administrative change.

Good luck, you may be successful in reducing the number of client connections, but do not set your hopes too high.

How do I make my MDB transactional?

I found from the application trace  that my MDB was doing MQGET, MQCMIT in the listener, and MQOPEN, MQPUT, MQCLOSE and no MQCMIT in my application.    Digging into this I found that the MQPUT was NO_SYNCPOINT, which was a surprise to me!

My application had session = connection.createSession(true, 1); // true = transactional. So I expected it to work.

The ejb-jar.xml had

<enterprise-beans>
  <message-driven>
    ...
    <transaction-type>Container</transaction-type>
  </message-driven>
</enterprise-beans>
...
<assembly-descriptor>
  <container-transaction>
    ...
    <trans-attribute>NotSupported</trans-attribute>
  </container-transaction>
</assembly-descriptor>
I changed NotSupported to Required and it worked.

 

The application trace for the Listener part of the MDB gave me

Operation      CompCode MQRC HObj (ObjName) 
MQXF_XASTART            0000 -
MQXF_GET       MQCC_OK  0000    2 (JMSQ2 )
MQXF_XAEND              0000 -
MQXF_XAPREPARE          0000 -
MQXF_XACOMMIT           0000 -

The trace for the application part of the MDB gave me

Operation      CompCode MQRC HObj (ObjName)
MQXF_XASTART            0000 -
MQXF_OPEN      MQCC_OK  0000    2 (CP0000)
MQXF_PUT       MQCC_OK  0000    2 (CP0000)
MQXF_CLOSE     MQCC_OK  0000    2 (CP0000)
MQXF_XAEND              0000 -
MQXF_XAPREPARE          0000 -
MQXF_XACOMMIT           0000 -

and the put options had _SYNCPOINT.

I had read documentation saying that you needed to have XAConnectionFactory instead of ConnectionFactory.  I could not get this to work, but found it was not needed for JMS; it may be needed for JDBC.

On WebLogic, why isn’t my MDB scaling past 10 instances?

This is another tale of one step back,  two steps sideways.  I was trying to understand why the JMX data on the MDBs was not as I expected, and why I was not getting tasks waiting.  I am still working on that little problem, but in passing I found I could not get my MDBs to scale.  I have rewritten parts of this post multiple times, as I understand more of it.  I believe the concepts are correct, but the implementation may be different to what I have described.

There are three parts to an MDB.

  1. A thread gets a message from the queue
  2. The message is passed to the onMessage() method of the application.
  3. The application uses a connection factory to get a connection to the queue manager and to send the reply.

Expanding this to provide more details.

Thread pools are used to reuse the MQ connection, as the MQCONN and MQDISC are expensive operations.  By using a thread pool, the repeated MQCONN and MQDISC can be avoided.

There is a specific pool for the application, and when threads are released from this pool, they are put into a general pool.   Periodically  threads can be removed from the general pool, by issuing MQDISC, and then deleting the thread.

Get the message from the queue

The thread has two modes of operation: async consume, or plain old-fashioned MQGET.

If the channel has SHARECNV(0) there is a listener thread which browses the queue, and waits a short period (for example 5 seconds) for a message.  The wait is short so that the thread can take action if required (for example stop running).  This means if there is no traffic there is an empty MQGET every 5 seconds, which can be expensive.

If the channel has SHARECNV(>0) then async consume is used.  Internally there is a thread which browses the queue, and multiple threads which can get the message.

The maximum number of threads which can get messages is defined by the maxPoolDepth activation-config property.

These threads are in a pool called EJBPoolRuntime.  Each MDB has a thread pool of this name, but from the JMX data you can identify the pool for each MDB, as it has a descriptor like MessageDrivenEJBRuntime=WMQ_IVT_MDB, Name=WMQ_IVT_MDB, ApplicationRuntime=MDB3, Type=EJBPoolRuntime, EJBComponentRuntime=MDB3/… where my MDB application was called MDB3.

The parameters are defined in the ejb-jar.xml file.   The definitions are documented here.  The example below shows how to get from a queue called JMSQ2, and there will be no more than 37 threads able to get a message.

<ejb-jar>
  <enterprise-beans>
    <message-driven>
      <activation-config>
        <activation-config-property>
          <activation-config-property-name>maxPoolDepth</activation-config-property-name>
          <activation-config-property-value>37</activation-config-property-value>
        </activation-config-property>
        <activation-config-property>
          <activation-config-property-name>destination</activation-config-property-name>
          <activation-config-property-value>JMSQ2</activation-config-property-value>
        </activation-config-property>
      </activation-config>
    </message-driven>
  </enterprise-beans>
</ejb-jar>

Note: I did get messages like the following, which I ignored (as I think they are produced in error):

    • <Warning> <EJB> <BEA-015073> <Message-Driven Bean WMQ_IVT_MDB(Application: MDB3, EJBComponent: MDB3.jar) is configured with unknown activation-config-property name maxPoolDepth>
    • <Warning> <EJB> <BEA-015073> <Message-Driven Bean WMQ_IVT_MDB(Application: MDB3, EJBComponent: MDB3.jar) is configured with unknown activation-config-property name destination>

The default value of maxPoolDepth is 10 – this explains why I only had  10 threads getting messages from the queue.

Passing the message to the application for processing.

Once a message is available, it is passed to the onMessage() method of the application. There is some WebLogic-specific code around this, which seems to add little value. The concepts are as follows (a rough sketch of this logic in code follows the list):

  1. There is an array of handles/beans of size max-beans-in-free-pool.
  2. When the first message is processed, “initial-beans-in-free-pool” beans are created to populate the array; this invokes the ejbCreate() method of the application.
  3. When a message arrives, find a free element in this array:
    1. If the slot has a bean, use it,
    2. else allocate a bean and store it in the slot.   This allocation invokes the ejbCreate() method of the application.  On my laptop it took about a second to allocate a new bean, which means there is a lag when responding to a spike in workload.
    3. Call the onMessage() method of the application.
  4. If all of the slots are in use – then wait.
  5. On return from onMessage(), flag the entry as free.
  6. Every idle-timeout-seconds scan the array, and free beans to bring the current size back to initial-beans-in-free-pool.  As part of this the ejbRemove() method of the application is invoked.
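
The sketch (mine, modelling the behaviour described above – not WebLogic’s actual implementation; the names are made up):

// Sketch of the described free-pool behaviour; not WebLogic's implementation.
class BeanPool {
    private final Object[] slots;     // max-beans-in-free-pool entries
    private final boolean[] inUse;
    private int created = 0;          // slots currently holding a bean

    BeanPool(int maxBeansInFreePool) {
        slots = new Object[maxBeansInFreePool];
        inUse = new boolean[maxBeansInFreePool];
    }

    // Find a bean for an arriving message, allocating one if necessary.
    synchronized Object acquire() throws InterruptedException {
        while (true) {
            for (int i = 0; i < slots.length; i++) {
                if (inUse[i]) continue;
                if (slots[i] == null) {     // empty slot: ejbCreate() - the slow path
                    slots[i] = newBean();
                    created++;
                }
                inUse[i] = true;
                return slots[i];            // caller drives onMessage() with this bean
            }
            wait();                         // all slots in use: wait for a release
        }
    }

    // On return from onMessage(), flag the entry as free.
    synchronized void release(Object bean) {
        for (int i = 0; i < slots.length; i++)
            if (slots[i] == bean) inUse[i] = false;
        notifyAll();
    }

    // Every idle-timeout-seconds: shrink back to initial-beans-in-free-pool.
    synchronized void shrink(int initialBeansInFreePool) {
        for (int i = 0; i < slots.length && created > initialBeansInFreePool; i++)
            if (slots[i] != null && !inUse[i]) {
                removeBean(slots[i]);       // ejbRemove()
                slots[i] = null;
                created--;
            }
    }

    private Object newBean()          { return new Object(); } // stands in for ejbCreate()
    private void removeBean(Object b) { }                      // stands in for ejbRemove()
}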

The definitions are documented here.

<weblogic-ejb-jar>
  <weblogic-enterprise-bean>
    <pool>
      <max-beans-in-free-pool>47</max-beans-in-free-pool>
      <initial-beans-in-free-pool>17</initial-beans-in-free-pool>
      <idle-timeout-seconds>60</idle-timeout-seconds>
    </pool>
  </weblogic-enterprise-bean>
</weblogic-ejb-jar>

I could find no benefit in using this pool.

The default max-beans-in-free-pool is 1000, which feels large enough.  You should make initial-beans-in-free-pool the same as, or larger than, the number of threads getting messages; see maxPoolDepth above.

If this value is too small, then periodically the pool will be purged down to the initial-beans-in-free-pool and then beans will be allocated as needed.  You will get a periodic drop in throughput.

Note the term max-beans-in-free-pool is not entirely accurate: the maximum number of threads for the pool is current threads in pool + active threads.   The term max-beans-in-free-pool  is accurate when there are no threads in use.

In the JMX statistics data, there is information on this pool.   The data name is like com.bea:ServerRuntime=AdminServer2, MessageDrivenEJBRuntime=WMQ_IVT_MDB, Name=WMQ_IVT_MDB, ApplicationRuntime=MDB3, Type=EJBPoolRuntime, where WMQ_IVT_MDB is the display name of the MDB, and MDB3 is the name of the jar file.  This allows you to identify the pool for each MDB.

Get a connection and send the reply – the application connectionFactory pool.

The application typically needs to issue an MQCONN, MQOPEN of the reply-to queue, put the message, and issue MQDISC before returning.   The MQCONN and MQDISC are expensive, so a pool is used to save the queue manager connection handle between calls.  The connections are saved in a thread pool.

In the MDB Java application there is code like ConnectionFactory cf = (ConnectionFactory)ctx.lookup("CF3");

where the connectionFactory CF3 is defined in the resource adapter configuration.

The connectionFactory cf can then be used when putting messages.

The logic is like

  • If there is a free thread in the connectionFactory pool then use it
  • else there is no free thread in the connectionFactory pool
    • if the number of threads in the connectionFactory pool is at the maximum value, then throw an exception
    • else create a new thread (doing an MQCONN etc)
  • when the java program issues connection.close(), return the thread to the connectionFactory pool.

It looks like the queue handle is not cached, so there is an MQOPEN… MQCLOSE of the reply queue for every request.
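
Here is a minimal sketch of this application-side logic (assuming the CF3 connection factory above and container-managed transactions; the class shape is mine, not WebLogic’s):

import javax.jms.*;
import javax.naming.InitialContext;

// Sketch of the reply leg of the MDB (container-managed transaction assumed;
// the createSession arguments are effectively ignored in that environment).
public class ReplySender {
    public void sendReply(Message request, String text) throws Exception {
        InitialContext ctx = new InitialContext();
        ConnectionFactory cf = (ConnectionFactory) ctx.lookup("CF3"); // pooled by the resource adapter
        Connection con = cf.createConnection();  // cheap if a pooled connection is free, MQCONN if not
        try {
            Session session = con.createSession(true, Session.SESSION_TRANSACTED);
            Destination replyTo = request.getJMSReplyTo();
            MessageProducer producer = session.createProducer(replyTo); // MQOPEN per request
            TextMessage reply = session.createTextMessage(text);
            reply.setJMSCorrelationID(request.getJMSMessageID());
            producer.send(reply);
            producer.close();                                          // MQCLOSE
        } finally {
            con.close();   // returns the underlying connection to the connectionFactory pool
        }
    }
}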

You configure the connectionFactory resource pool from: Home, Deployments, click on your resource adapter, click on the Configuration tab, click on the + in front of the javax.jms, ConnectionFactory, click on the connectionFactory name, click on the Connection Pool tab, specify the parameters and click on the save button.
Note: You have to stop and restart the server or redeploy the application to pick up changes!

This pool size needs to have enough capacity to handle the case when all input threads are busy with an MQGET.

JMX provides statistics with a description like com.bea: ServerRuntime=AdminServer2, Name=CF3, ApplicationRuntime=colinra, Type=ConnectorConnectionPoolRuntime, ConnectorComponentRuntime=colinra_colin where CF3 is the name of the connection pool defined to the resource adapter, colinra is the name I gave to the resource adapter when I installed it, colin.rar is the name of the resource adapter file.

Changing userids

The application connectionFactory pool can be used by different MDBs.  You need to make sure this pool has enough capacity for all the MDBs using it.

If the pool is used by MDBs running with different userids, then when a thread is obtained, if the thread was last used for a different userid, it has to issue MQDISC and MQCONN with the current userid; this defeats the purpose of having a connection pool.

To prevent this you should have a separate connection pool for each set of MDBs running with the same userid.

Getting a thread from the general pool may have the same problem, so you should ensure your pools have a maximum limit which is suitable for expected peak processing, and an initial pool size suitable for your normal processing.   This should reduce the need to switch userids.

Cleaning up the connectionFactory

When the connectionFactory is configured, you can specify

  • Shrink Frequency Seconds:
  • Shrink Enabled: true|false

These parameters effectively say: after the “Shrink Frequency Seconds”, if the number of threads in the connectionFactory pool is larger than the initial pool size, then end threads (doing an MQDISC) to reduce the number of threads to the initial pool size.   If the initial pool size is badly chosen you may get 20 threads ending, so there are 20 MQDISCs, and because of the load, 20 threads immediately created again to handle the workload.  During this period there will be insufficient threads to handle the workload, so you will get a blip in the throughput.

If you have one connectionFactory pool being used by a high importance MDB and by a low importance MDB, it could be that the high importance MDB is impacted by this “release/acquire”, and the low priority MDB is not affected.  Consider isolating the connectionFactory pools and specify the appropriate initial pool size.

To find out what was going on I used

  • DIS QSTATUS(inputqueue) to see the number of open handles.   This is the listener count (1) + the current number of threads doing MQGETs, so with maxPoolDepth = 19, this value was up to 20.
  • I changed my MDB application to display the instance number when it was deployed.
      import java.sql.Timestamp;
      import java.text.SimpleDateFormat;
      import java.util.concurrent.atomic.AtomicInteger;
      ...
      // one counter shared across all instances of this MDB
      private final static AtomicInteger count = new AtomicInteger(0);
      private int instance;        // this bean's instance number
      private int messageCount;    // incremented in onMessage(), not shown
      ...
      public void ejbCreate() {
        SimpleDateFormat sdftime = new SimpleDateFormat("HH:mm:ss.SSS");
        Timestamp now = new Timestamp(System.currentTimeMillis());
        instance = count.getAndIncrement();  // first instance is 0, matching the output below
        System.out.println(sdftime.format(now) + ":" + this.getClass().getSimpleName()
                           + ":EJBCreate:" + instance);
      }
      public void ejbRemove() {
        System.out.println(this.getClass().getSimpleName() + ":EJBRemove:" + instance
                           + " messages processed " + messageCount);
        count.decrementAndGet();
      }

This gave me a message which told me when the instance was created, so I could see when it was started.   I could then see more instances created as the workload increased.

07:16:50.520:IVTMDB:EJBCreate:0

  • By using a client connection, I could specify the appltag for the connection pool, and so see the number of MQCONNs from the application connectionFactory (see the sketch below).
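
One way to set that appltag from JMS is on the IBM MQ connection factory (a sketch; the name CF3POOL is made up):

import javax.jms.JMSException;
import com.ibm.msg.client.jms.JmsConnectionFactory;
import com.ibm.msg.client.jms.JmsFactoryFactory;
import com.ibm.msg.client.wmq.WMQConstants;

public class ApplTagFactory {
    // Build a client-mode connection factory whose connections show up with
    // APPLTAG(CF3POOL) in DIS QSTATUS / DIS CONN. ("CF3POOL" is a made-up name.)
    public static JmsConnectionFactory create() throws JMSException {
        JmsFactoryFactory ff = JmsFactoryFactory.getInstance(WMQConstants.WMQ_PROVIDER);
        JmsConnectionFactory cf = ff.createConnectionFactory();
        cf.setIntProperty(WMQConstants.WMQ_CONNECTION_MODE, WMQConstants.WMQ_CM_CLIENT);
        cf.setStringProperty(WMQConstants.WMQ_APPLICATIONNAME, "CF3POOL");
        return cf;
    }
}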

What happens if I get the numbers wrong?

  1. If the input queue is slow to process messages, or the depth is often too high, you may have a problem.
  2. If ejb-jar.xml maxPoolDepth is too small, this will limit the number of messages you can process concurrently.
  3. The weblogic max-beans-in-free-pool is too small.  If all the beans in the pool (array) are busy, consider making the pool bigger.   Requests queue in the listener threads waiting for a free MDB instance.   However the JMX data has fields with names like “Wait count”; in my tests these were always zero, so I think these fields are of no value.
  4. The number of connections in the connectionFactory pool is too small.  If the number of requests exceeds the pool size, the MDB instance gets an exception: MQJCA1011: Failed to allocate a JMS connection.  You need to change the Max Capacity of the connectionFactory pool in the resource adapter definition.
  5. If you find you have many MQDISCs and MQCONNs happening at the same instant, consider increasing the initial size of the connectionFactory pool.
  6. Make the initial values suitable for your average workload.  This will prevent  the periodic destroy and recreate of the connections and beans.

 

You may want to have more than one weblogic server for availability and scalability.

You could also deploy the same application with a different MDB name, so if you want to stop and restart an MDB, you have another MDB processing messages.

Are all your JMS messages persistent?

While debugging my application to see why it was so slow, I found from the MQ activity trace that my replies were all persistent.

The first problem was that by default all JMS messages are persistent, so I used

int deliveryMode = message.getJMSDeliveryMode();

to get the persistence of the input message,

and used the obvious code to set the JMSDeliveryMode,

TextMessage response = session.createTextMessage("my reply");
response.setJMSDeliveryMode(deliveryMode);

to set it the same as the input message.  I reran my test and the reply was still persistent.

Eventually I found you need

producer = session.createProducer(dest);
producer.setDeliveryMode(deliveryMode);

And this worked!  It is all explained here.
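
Putting the pieces together, a sketch of the working pattern – the first attempt failed because send() takes the delivery mode from the producer, and overwrites any JMSDeliveryMode already set on the message:

int deliveryMode = message.getJMSDeliveryMode();   // persistence of the input message

MessageProducer producer = session.createProducer(dest);
producer.setDeliveryMode(deliveryMode);            // this is the setting send() honours

TextMessage response = session.createTextMessage("my reply");
producer.send(response);   // any setJMSDeliveryMode() on the message is ignored here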

How do I check?

You can either check your code (bearing in mind that this may be hidden by the productivity tools you use (Spring, Camel etc)), or turn on activity trace for a couple of seconds to check.

What do I need to make my business applications resilient?

In the same way that a three-legged stool needs three strong legs, a business transaction has three legs.  The business transaction needs
  1. An architecture and infrastructure which can provide the resilience
  2. Applications which are well designed, well coded and robust
  3. Operations that can detect problems and automatically take actions to remedy the problems.
If any one is weak, the whole business transaction is not resilient.
From the infrastructure perspective, the question of needing MQ shared queue, MQ midrange or the appliance comes down to the requirements of the business and the management of risk.
For your business application you need to understand the impact to your business.  If the application was not available for
  • 1 second
  • 1 minute
  • 1 hour
The cost could be reputational, regulatory (the rules of your industry), and financial.  For example an outage may cost you 1 million dollars a minute in fines and compensation.  Your reputation could suffer if many people report problems on Twitter when your service is not available.

Overview of availability options.

  1. Queue sharing groups on z/OS give the highest level of availability, with the highest upfront cost (preventing an outage might be worth that cost, and more and more businesses are using QSGs now)
  2. The data replication features in the appliance and replicated data queue managers (RDQM) are the best ways to achieve high availability of queue managers on distributed. See RDQM for HA, and RDQM for Disaster Recovery.
  3. Multi-instance queue managers, where you have an active and a standby queue manager, and clusters can be useful too.
The applications need to be written to be reliable and resilient, so as to:
  1. Not cause an outage, and use MQ (and other software in the stack) as efficiently as possible.  Many “outages” are caused by badly written applications.
  2. Deal well with a problem if one occurs.
  3. Make it easy to diagnose any problems that occur
You need to automate your operations so errors are quickly picked up and actioned.
What availability do your business applications need?
You need to be able to handle planned outages.  These may occur once a week.  You stop work going one route, so it flows via a different route.  Once “all the pipes are empty” you can shut down.  This should be transparent to the applications.
You need to be able to handle unplanned outages where messages may be in flight in the queue manager and network.  These may occur once a year.  If there is a problem, messages in flight could be stuck on a queue manager until the queue manager is restarted.  Once a problem is detected, new messages should be able to flow via an alternative route.  In this case a few seconds, or minutes worth of messages could be unavailable.
You can use clustering to automatically route traffic over available channels while a problem in one queue manager is being resolved.
Do you have a requirement for serialized transactions where the order of execution must be maintained?  For example trading stocks and shares.  The price of the second request depends on the trade of the first request.   If so, this means you can only have one back end server, no parallelism, and one route to the back end.  This does not provide a robust solution.
How smart are your applications?
If your application gets no reply within 1 second, the application could try resending the request; it may take a different route through the network, and succeed.  For inquiry transactions, a duplicate request should have little impact.  For update requests, the applications need logic to handle a possible duplicate request, where they detect the request has already been processed, and a negative response is sent back.
The business application may need a program to clear up possible unprocessed or duplicate responses and take compensating action.
Having smart applications which are resilient means the infrastructure does not need to be so smart.
Operational maturity
For the best reliability and availability you need a mature operations environment.
The infrastructure is usually reliable.  “Outages” usually occur because of human intervention, a change, or a bad application.  For example an application can continually retry a failing connection, and fill up the MQ error logs.
Examples of operational maturity include
  1. Do not make a change to two critical systems at once, have a day between changes.
  2. Make sure every change has a back-out procedure which has been tested.
  3. You monitor the systems, so you can quickly tell if there is abnormal behavior.
It can take several minutes to detect a problem, shut down, and restart a queue manager (perhaps in a different place).
If you have 100 Linux servers to support, it takes a lot of work to make changes on all of these servers (from making a configuration change to applying fixes).  It may be less work on z/OS.
You need to make sure that the infrastructure has sufficient capacity, and a queue manager is not short of CPU, nor has long disk response time.
Below are several configurations and considerations:
Shared queue across multiple machines, across sites
A message in a Queue Sharing Group can be processed by any queue manager in a QSG,  providing high availability.
Good for business transactions where
  1. You cannot have messages “paused” for minutes while a server is restarted.
  2. You can tolerate a “pause” of a few seconds if one QM in the QSG goes down, and the channel restarts to a different queue manager in the QSG.
  3. Your applications are not smart.
  4. There is a need for serialized message processing.
  5. The cost of an outage would cover the cost of running z/OS.
Multiple mid-range queue managers configured across multiple machines across sites (RDQM); use of the MQ appliance
For business transactions where
  1. Messages can be spread across any of the servers to provide scalability and availability.
  2. If you have a requirement for short response time, you need smart applications which can retry sending the message and handle duplicate requests and responses.
  3. If you can tolerate waiting for in-flight messages whilst a queue manager is restarted, the applications do not need to be so smart.

These mid-range systems can take a minute or so to restart after an outage.

RDQM queue managers are generally better than multi-instance queue managers. See the performance report here.

Single server
This is a single point of failure, and not suitable for production work.
Your enterprise may have combinations of the above patterns.

You need to consider each business application and evaluate the risk.

For example

  • My applications are not smart.  They are running on mid-range with 2 servers.  If I had an unplanned outage which lasted for 5 minutes, then with my typical message volumes I could have 6000 requests stuck until the queue manager was restarted.  My management would not be happy with this.
  • If I had an outage on these two servers…  ahh that would be a problem.  I need more servers.

Many thanks to Gwydion of IBM for his comments and suggestions.

How long will it take my queue manager to fail over and restart on midrange?

Following on from my blog post about making sure your file systems are part of a consistency group – so the data is consistent after a restart – the next question is

“how long will it take to fail over?”.

There are two areas you need to look at

  1. The time to detect an outage.  This can be broken down into the time for the active queue manager to release the lock, and the time taken for the release of the lock to be reflected to the standby system.  You need to test and measure this time.  For example you may need to adjust your network configuration.
  2. The time taken to restart the queue manager.  There is an excellent blog post on this from Ant at IBM.
    1. The blog post talks about the rate at which clients can connect to MQ.  Yes, MQ can support 10,000 client connections.  But if it takes a significant time to reconnect them all, you may want multiple queue managers, and have parallelism.
    2. Avoid deep queues.  In my time at IBM I saw many customers with thousands of messages on a queue, some over a year old!  You need to clean up such queues.  Your applications team should have a process which runs perhaps once a day and cleans up all old messages (see the sketch after this list).  For example, a getting application instance timed out and terminated, and then the reply arrived.
    3. During normal running most of the IO is write – and this often goes into cache, so very fast.   During recovery the IO is reading from disk which might be from rotating disks rather than solid state.
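
As an illustration of such a clean-up process, here is a hypothetical sketch using a JMS selector on JMSTimestamp (milliseconds since the epoch); the one-day cut-off is a made-up value:

import javax.jms.*;

// Hypothetical daily clean-up: destructively consume messages older than a day.
public class OldMessageCleaner {
    public void clean(Session session, Queue queue) throws JMSException {
        long cutoff = System.currentTimeMillis() - 24L * 60 * 60 * 1000;
        MessageConsumer cleaner = session.createConsumer(queue, "JMSTimestamp < " + cutoff);
        Message old;
        while ((old = cleaner.receive(1000)) != null) {
            // log or archive the message here; getting it removes it from the queue
        }
        cleaner.close();
    }
}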

One lesson from this is that you need to test the recovery.   You need to test a realistic scenario – take the worst-case number of connections, with peak throughput, and then pull the plug on the active system.

Another lesson is you need to do this regularly – for example monthly as configurations can change.

Are your mirrored file systems consistent?

It started with a question “Several years ago you told us about checking your MQ disks are consistent,  can you provide us with a link to any documentation please?”.

I’ll explain why this is important, and what you need to do to ensure you have data integrity and do not lose it when you go to a backup site.

With some applications that write to multiple files, the order that data is actually written to the disk does not matter.  For example when you print data, it often stays in a buffer, and is written out when the buffer is full.

A transaction manager

With programs that handle transactions (a transaction manager) it is critical that writes to disk are done in the order they are issued.  If the writes are not in the correct order, and the system crashes and tries to recover the transaction, the recovery may be missing key data (“it has taken the money from your account… it cannot see who should get the money”) and so data integrity is lost.

With local disks, the sequence is

  • Write to file1,
  • Wait for confirmation that the IO has completed
  • Write to file2,
  • Wait for confirmation that the IO has completed

Consider the case where file1 and file2 are on different file systems.  For example file1 could be the transaction log, and file2 the queue data.  (Picture file system 1 on slow disks, and file system 2 on fast disks – so IO for file2 is faster than IO for file1.)

With mirrored disks with synchronous replication, the sequence is

  • Write to file1 local copy; send data to remote site,  write to file1, send back OK when completed
  • Wait for confirmation that both IOs have completed
  • Write to file2 local copy; send data to remote site, write to file2, send back OK when completed
  • Wait for confirmation that both IOs have completed

With synchronous replication the two locations need to be within 10s of kilometers.  The response time of the file write depends on the distance.
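
As a rough worked example: light in glass fibre travels at roughly 200 km per millisecond, so sites 50 km apart add about 0.5 ms of round-trip network latency to every synchronous write, before any disk time is counted.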

With Asynchronous replication the two locations can be 100s of kilometers apart.

In this case the sequence is

  • Write to file1 local copy; send data to remote site,  write to file1, send back OK when completed
  • Wait for confirmation that the local IO has completed.
  • Write to file2 local copy; send data to remote site, write to file2, send back OK when completed
  • Wait for confirmation that the local IO has completed.

The disk subsystem manages the responses coming back from the remote end.

For capacity reasons there are usually multiple paths between the two sites.  It is possible that the data for file 2 gets there before the data for file1.  If the writes are done in the wrong order, this could be bad news.

Consistency group

The architecture of mirroring systems has the concept of a consistency group.   You define one or more consistency groups, and put file systems into a consistency group.  For any files in the same consistency group the write order will be honoured.  So in the case above, if the two files are in the same consistency group, the mirroring will wait, write the data to file1, then write to file2.  This gives a solution with data integrity.

The lurking problem.

Someone needs to define the file systems to each consistency group.   The storage manager may have said

  • “all file systems are part of one consistency group”.
  • “production data is in one consistency group, test data is in another consistency group”
  • “I’ll guess, and hope people tell me their requirements”

How will I know if I have a problem?

The sure-fire way of finding out if you have a problem is to lose a site (for example a power outage).  Ninety-nine times out of a hundred it may be fine, and then one time in a hundred you find you cannot restart your systems on the other site.  This is clearly the wrong time to find out.

Check with your storage administrator and give them information about the file systems that need to be part of the same consistency group.

Practice your failover – perhaps weekly, at least monthly.