Some secrets of shared conversations and other dark corners of MQ

I was looking into how to balance the number of server threads processing messages, and discovered I knew nothing about shared conversations and related topics. Of course I could draw them on a whiteboard and wave my hands around, but I could not actually describe how they work.

Firstly, some things I expect everyone knows (except for me).

  1. You can define a shared connection handle. This can be used in different threads – but only serially. See Shared (thread independent) connections with MQCONNX. A sketch of requesting one follows this list.
  2. A thread can only connect to MQ once using a nonshared connection, otherwise you get MQRC_ALREADY_CONNECTED: “A thread can have no more than one nonshared handle.”
  3. A nonshared connection cannot be shared between threads. I got MQRC_HCONN_ERROR: “The handle is a nonshared handle that is being used by a thread that did not create the handle.”
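
A minimal sketch of point 1, asking MQCONNX for a shared handle. The queue manager name is illustrative and error handling is minimal:

#include <stdio.h>
#include <cmqc.h>

int main(void)
{
  MQCHAR  qmName[MQ_Q_MGR_NAME_LENGTH + 1] = "QMA"; /* illustrative name */
  MQCNO   cno = {MQCNO_DEFAULT};
  MQHCONN hConn = MQHC_UNUSABLE_HCONN;
  MQLONG  compCode, reason;

  /* MQCNO_HANDLE_SHARE_BLOCK: the handle may be used from any thread,
   * but only by one thread at a time; a second thread blocks until the
   * handle is free. MQCNO_HANDLE_SHARE_NO_BLOCK fails instead of waiting. */
  cno.Options = MQCNO_HANDLE_SHARE_BLOCK;

  MQCONNX(qmName, &cno, &hConn, &compCode, &reason);
  if (compCode != MQCC_OK)
    printf("MQCONNX failed, reason %d\n", (int)reason);

  /* ... use hConn from any thread (serially), then MQDISC ... */
  return 0;
}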

Multi-threaded program

I set up a program which did

do I = 1 to number of threads;
pthread_create – use subroutine
end

The subroutine did

MQCONNX
MQCB (set up MQCB to get queue manager change events such as reconnect)
MQOPEN…

Each thread needed its own MQCONN, and an MQCB to capture queue manager events such as disconnect requests and reconnect events. A fuller sketch in C is below.
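
This is roughly what the program looked like, as a sketch. The queue manager name, queue name, and thread count are illustrative, it assumes the client environment (for example a CCDT) is already set up, and error handling is omitted:

#include <stdio.h>
#include <string.h>
#include <pthread.h>
#include <cmqc.h>

/* Minimal event handler: just report the reason code MQ passes us. */
static void MQENTRY eventHandler(MQHCONN hc, MQMD *pMsgDesc,
                                 MQGMO *pGetMsgOpts, MQBYTE *buffer,
                                 MQCBC *pContext)
{
  printf("queue manager event, reason %d\n", (int)pContext->Reason);
}

/* Each thread makes its own connection, registers the event handler,
 * and opens the queue. */
static void *worker(void *arg)
{
  MQCHAR  qmName[MQ_Q_MGR_NAME_LENGTH + 1] = "QMA";   /* illustrative */
  MQCNO   cno = {MQCNO_DEFAULT};
  MQCBD   cbd = {MQCBD_DEFAULT};
  MQOD    od  = {MQOD_DEFAULT};
  MQHCONN hConn;
  MQHOBJ  hObj;
  MQLONG  compCode, reason;

  cno.Options = MQCNO_RECONNECT;              /* allow automatic reconnect */
  MQCONNX(qmName, &cno, &hConn, &compCode, &reason);

  cbd.CallbackType     = MQCBT_EVENT_HANDLER; /* queue manager events */
  cbd.CallbackFunction = eventHandler;
  MQCB(hConn, MQOP_REGISTER, &cbd, MQHO_UNUSABLE_HOBJ, NULL, NULL,
       &compCode, &reason);

  strncpy(od.ObjectName, "COLIN.QUEUE", MQ_Q_NAME_LENGTH); /* illustrative */
  MQOPEN(hConn, &od, MQOO_INPUT_SHARED | MQOO_FAIL_IF_QUIESCING,
         &hObj, &compCode, &reason);
  /* ... MQGET loop, MQCLOSE, MQDISC ... */
  return NULL;
}

int main(void)
{
  pthread_t t[15];
  int i;
  for (i = 0; i < 15; i++)
    pthread_create(&t[i], NULL, worker, NULL);
  for (i = 0; i < 15; i++)
    pthread_join(t[i], NULL);
  return 0;
}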

DIS CHSTATUS shows conversations spread across channels

My CLNTCONN channel was defined with SHARECNV(10). I started my program and specified 15 threads. DIS CHS(COLIN) gave me two channel instances:

 AMQ8417I: Display Channel Status details.
CHANNEL(COLIN) CHLTYPE(SVRCONN)
CONNAME(127.0.0.1) CURRENT
STATUS(RUNNING) SUBSTATE(RECEIVE)
CURSHCNV(5)

AMQ8417I: Display Channel Status details.
CHANNEL(COLIN) CHLTYPE(SVRCONN)
CONNAME(127.0.0.1) CURRENT
STATUS(RUNNING) SUBSTATE(RECEIVE)
CURSHCNV(10)

One channel instance had CURrent SHared CoNVersations (CURSHCNV) of 5, the other had 10. 5+10 = 15 was the number of threads I had running in my program. With 25 threads, I had three channel instances active and a total of 25 CURSHCNV.

When my program was running, the CONNS value from DIS QMSTATUS increased by 25, the number of threads I had running.
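
For example (the connection count shown is illustrative):

echo "DIS QMSTATUS CONNS" | runmqsc QMA

gave output along the lines of

AMQ8705I: Display Queue Manager Status Details.
   QMNAME(QMA)   CONNS(53)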

Morag wrote a post on MaxChannels vs DIS QMSTATUS CONNS.

Things that didn’t work

I tried to issue one MQCONN and share the connection between the threads – this did not work, as it gave me MQRC_HCONN_ERROR: a nonshared handle cannot be shared between threads.

This description of MQRC_HCONN_ERROR – a nonshared handle cannot be shared between threads – is not entirely true.

I use an MQCB to get notified about queue manager events. You specify MQCB and pass in the hConn. In my MQCB routine, I could issue MQINQ using the same hConn. So I did have the same hConn being used by different threads – but one of them is a special thread, managed by MQ, on which the callback runs.

I tried to use Async Consume, where you use MQCB to specify a message handler program to process each message as it arrives. You do MQCONN, and then the hConn is used by the asynchronous process; it cannot be used by other MQ API requests or a second Async Get. In my main program I tried to issue 15 MQCONNs, and use one hConn for each Async Get. I got MQRC_ALREADY_CONNECTED: “A thread can have no more than one nonshared handle.”

I solved this with the same technique as above:

do I = 1 to number of threads; 
pthread_create – use subroutine
end
subroutine: use Async Consume.
MQCONN
MQCB for queue manager events
MQCB for Async Consume
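
As a sketch, each thread’s “use Async Consume” step could look like this. The queue name is illustrative, messageConsumer is a hypothetical callback name, and error handling is omitted:

#include <stdio.h>
#include <string.h>
#include <cmqc.h>

/* Hypothetical consumer: invoked once for each message that arrives. */
static void MQENTRY messageConsumer(MQHCONN hConn, MQMD *pMsgDesc,
                                    MQGMO *pGetMsgOpts, MQBYTE *buffer,
                                    MQCBC *pContext)
{
  if (pContext->CallType == MQCBCT_MSG_REMOVED)
    printf("got a message of length %d\n", (int)pContext->DataLength);
}

void asyncConsume(MQHCONN hConn)     /* hConn from this thread's MQCONN */
{
  MQOD    od   = {MQOD_DEFAULT};
  MQMD    md   = {MQMD_DEFAULT};
  MQGMO   gmo  = {MQGMO_DEFAULT};
  MQCBD   cbd  = {MQCBD_DEFAULT};
  MQCTLO  ctlo = {MQCTLO_DEFAULT};
  MQHOBJ  hObj;
  MQLONG  compCode, reason;

  strncpy(od.ObjectName, "COLIN.QUEUE", MQ_Q_NAME_LENGTH); /* illustrative */
  MQOPEN(hConn, &od, MQOO_INPUT_SHARED | MQOO_FAIL_IF_QUIESCING,
         &hObj, &compCode, &reason);

  /* Register the consumer against the open queue ... */
  cbd.CallbackType     = MQCBT_MESSAGE_CONSUMER;
  cbd.CallbackFunction = messageConsumer;
  MQCB(hConn, MQOP_REGISTER, &cbd, hObj, &md, &gmo, &compCode, &reason);

  /* ... and start delivery: MQ now drives the callback for each message. */
  MQCTL(hConn, MQOP_START, &ctlo, &compCode, &reason);
}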

I had an email exchange with Morag (thank you), who said:

You can have one MQCONN and 15 async getters if you want, if you use the shared handle connection option. (cno.Options … + MQCNO_HANDLE_SHARE_BLOCK)

Only one Async Callback function (and thus one message and application logic) can be processed at a time. One connection equals one channel (or conversation over a channel if you are sharing them – i.e. SHARECNV > 1).
Equally you can have 15 MQCONNs and associate each MQCB with a different hConn.
It all depends what sort of concurrency you want in your application. Do you want parallel processing because your workload is heavy, or do you just want to monitor and process 15 different, lightly used queues in the simplest way possible?
If an hConn is currently in use by one callback call, another will not be invoked until the first callback completes.

So if you have an Async consumer for queue1, and an Async consumer for queue2, and a message arrives on each queue, it will work as follows:

  • Async code for queue1 is invoked with the message, it does a database update, and an MQPUT1 to the reply-to queue. This application returns.
  • Only after the previous code has returned can the Async code for queue2 be invoked; it does a database update and an MQPUT1 to the reply-to queue, and returns.

It is not worth having more than one Async consumer per queue, as you will not get parallel processing. You will get:

  • Wait until the previous consumer has finished, run Async consumer 1 for the queue … return;
  • Wait until the previous consumer has finished, run Async consumer 2 for the same queue … return;

You might just as well have one Async consumer per queue.

As Morag said, it all depends what sort of concurrency you want in your application. Do you want parallel processing because your workload is heavy, or do you just want to monitor and process 15 different, lightly used queues in the simplest way possible?

With one application and 15 Async consumers set up this way, DIS CHS(..) gave me CURSHCNV(1) – one conversation for the whole application.
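
A sketch of this one-connection, many-consumers pattern, following Morag’s suggestion. The function name and queue list are hypothetical, messageConsumer is the callback from the earlier sketch, and error handling is omitted:

#include <string.h>
#include <cmqc.h>

void consumeManyQueues(MQCHAR *qmName,
                       char queueNames[15][MQ_Q_NAME_LENGTH + 1])
{
  MQCNO   cno  = {MQCNO_DEFAULT};
  MQMD    md   = {MQMD_DEFAULT};
  MQGMO   gmo  = {MQGMO_DEFAULT};
  MQCBD   cbd  = {MQCBD_DEFAULT};
  MQCTLO  ctlo = {MQCTLO_DEFAULT};
  MQHCONN hConn;
  MQHOBJ  hObj[15];
  MQLONG  compCode, reason;
  int     i;

  cno.Options = MQCNO_HANDLE_SHARE_BLOCK;   /* the shared handle option */
  MQCONNX(qmName, &cno, &hConn, &compCode, &reason);

  for (i = 0; i < 15; i++)                  /* one consumer per queue */
  {
    MQOD od = {MQOD_DEFAULT};
    strncpy(od.ObjectName, queueNames[i], MQ_Q_NAME_LENGTH);
    MQOPEN(hConn, &od, MQOO_INPUT_SHARED | MQOO_FAIL_IF_QUIESCING,
           &hObj[i], &compCode, &reason);

    cbd.CallbackType     = MQCBT_MESSAGE_CONSUMER;
    cbd.CallbackFunction = messageConsumer;  /* as in the earlier sketch */
    MQCB(hConn, MQOP_REGISTER, &cbd, hObj[i], &md, &gmo,
         &compCode, &reason);
  }
  MQCTL(hConn, MQOP_START, &ctlo, &compCode, &reason);
}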

What does SHARECNV on a svrconn channel do?

On QMA, I changed SHARECNV(10) to SHARECNV(0). When QMA was the only queue manager running, I got

rc 2012 (07dc) MQRC_ENVIRONMENT_ERROR.

The reason given is: “An MQ client application that has been configured to use automatic reconnection attempted to connect using a channel defined with SHARECNV(0).”

When I had both QMA and QMC running, there was a delay of a couple of seconds during which the threads connected to QMA, got back MQRC_ENVIRONMENT_ERROR, then tried to connect to QMC – and succeeded. There were no error messages in /var/mqm/errors/AMQERR01.LOG to tell me there was a problem in QMA.

On QMA, I changed SHARECNV(10) to SHARECNV(1). When QMA was the only queue manager running, I got 10 channel instances of COLIN running, each with CURSHCNV(1), as expected.

I changed the SVRCONN channel to specify SHARECNV(30), and used 30 threads. I got 3 channel instances, each with 10 conversations. This was a surprise to me.

This page says: “If the CLNTCONN SHARECNV value does not match the SVRCONN SHARECNV value, the lower of the two values is used.” My CLNTCONN channel still had SHARECNV(10), so each channel instance could carry at most 10 conversations, and 30 threads needed 30/10 = 3 channel instances.

I was using the CCDT in JSON format, so I added the sharingConversations attribute to the channel’s connectionManagement section:

"connectionManagement":
{
"sharingConversations": 30,
},

"name": "COLIN",
"clientConnection":...

When I restarted my application and specified 30 threads, I had one channel started with DIS CHS… giving CURSHCNV(30).

The Knowledge Centre says: “Use SHARECNV(1). Use this setting whenever possible. It eliminates contention to use the receiving thread, and your client applications can take advantage of new features.” So although you can make SHARECNV large, a value of 10 or 1 may be best. It is a balance between having more connections, which use more resources, and the impact of sharing a channel on channel throughput.

Uniform Clusters and shared conversations

I started up 8 threads, and had one channel instance with 8 conversations on it. I had an MQCB event handler to report when conversation balancing occurred: that is, when a conversation was disconnected and reconnected. A sketch of such a handler is below.
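
As a sketch, the event handler (registered with cbd.CallbackType = MQCBT_EVENT_HANDLER, as earlier) can test the reason code it is passed; the function name is hypothetical:

#include <stdio.h>
#include <cmqc.h>

static void MQENTRY balanceReporter(MQHCONN hConn, MQMD *pMsgDesc,
                                    MQGMO *pGetMsgOpts, MQBYTE *buffer,
                                    MQCBC *pContext)
{
  switch (pContext->Reason)
  {
    case MQRC_RECONNECTING:          /* the connection is being moved    */
      printf("reconnecting...\n");
      break;
    case MQRC_RECONNECTED:           /* attached again, possibly to a    */
      printf("reconnected\n");       /* different queue manager          */
      break;
    default:
      printf("event, reason %d\n", (int)pContext->Reason);
      break;
  }
}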

At start up all conversations connected to QMA. Over time, some conversations moved to QMC.

Eventually, I had

  • one channel instance to QMA with CURSHCNV(4) and
  • one channel instance to QMC with CURSHCNV(4)

So even with shared conversations you get balancing across channels.

How to start more servers on midrange

I came upon this question when looking into the new Uniform Clustering support in V9.1.2.

5 years ago, a common pattern was to have one machine containing a front-end web server, MQ, and back-end servers (in bindings mode) processing the requests and going to a remote database. To make this do more work, you increased the number of servers, and perhaps added more CPUs to the machine.

These days you have MQ in its own (virtual) machine, the front-end web server in its own (virtual) machine connected to MQ over a client interface, and the server application in its own (virtual) machine, also connected to MQ over a client interface and going to a remote database.

To scale this, you add more MQ machines, or more server machines. In my view this solves some administration problems, but introduces more problems – but that is not today’s discussion.

Given this modern configuration, how do you start enough servers to manage the workload?

Consider the scenario where you have MACHINEMQ with the queue manager on it, and MACHINEA and MACHINEB with the server applications on them.

Having “smarts in the application”

  1. You want enough servers running, but not too many. (Too many can flood the downstream processes, for example causing contention in a database. Using MQ as a throttle can sometimes improve overall throughput.)
  2. If a server thread is not doing any work, then shut it down.
  3. If there is a backlog, then start more instances of the server threads.

In the server application you might have logic like the pseudocode below; a C sketch of the MQINQ follows it.

MQINQ curdepth, ipprocs

if (curdepth > X and ipprocs < Y) then
{
    do_something
}

if (get-wait timed out and ipprocs > 2) then return and free up the session

where ipprocs (IPPROCS) is the number of handles with the queue open for input, and X and Y are tuning thresholds you choose.
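
As a sketch in C, assuming the queue was opened with MQOO_INQUIRE as well as the input options; checkBacklog and do_something are hypothetical names:

#include <cmqc.h>

extern void do_something(void);   /* hypothetical: start another server */

void checkBacklog(MQHCONN hConn, MQHOBJ hObj, MQLONG X, MQLONG Y)
{
  MQLONG selectors[2] = {MQIA_CURRENT_Q_DEPTH, MQIA_OPEN_INPUT_COUNT};
  MQLONG intAttrs[2];
  MQLONG compCode, reason;

  MQINQ(hConn, hObj, 2, selectors, 2, intAttrs, 0, NULL,
        &compCode, &reason);

  if (compCode == MQCC_OK
      && intAttrs[0] > X      /* curdepth: messages waiting      */
      && intAttrs[1] < Y)     /* ipprocs: handles open for input */
  {
    do_something();
  }
}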

For CICS on z/OS, it was easy; do_something was “EXEC CICS START TRAN…”

When running on Unix, the “do_something” is a bit harder.

My first thoughts were…

It is not easy to create new processes to run more work.

  1. You can use spawn to do this – not very easy or elegant.
  2. I next thought the application instances could create a trigger message, so a trigger monitor could run and start more processes. This means:
    1. Unless you are really clever, the trigger monitor starts a process on its local machine. So running a trigger monitor on MACHINEA would create more processes on MACHINEA.
    2. This means you need a trigger monitor on both MACHINEA and MACHINEB.
    3. If you put a trigger message, the message may always go to MACHINEA, always go to MACHINEB, or go to either. This may not help if one machine is overloaded and gets all of the trigger messages.
  3. I thought you could have one process and lots of threads. I played with this, and found out enough to write another blog post. It was difficult to increase the number of threads dynamically, so I found it easiest to pass the number of threads into the application, rather than try to change it while running.
  4. The best “do_something” was to produce an event or alert and have automation start the applications. Automation should have access to other information, so you can have rules such as “Pick MACHINEA or MACHINEB which has the lowest CPU usage over the last 5 minutes – and start the application there”

And to make it more complex.

Today’s scenario has multiple queue manager machines, for availability and scalability, so now you have to worry about which queue manager to connect to, as well as processing the messages on the queue.

MQ 9.1.2 introduced Uniform Clustering, which balances the number of client channel connections across queue manager servers, and can, under the covers, tell an application to connect to a different queue manager.

This should make the balancing simpler. Assuming the queue managers are doing equal amounts of work, you should get workload balancing.

Notes on setting up your server

You need to be careful to define your CCDT with CLNTWGHT. If CLNTWGHT is 0, channels are selected in alphabetical order, so all your connects could go to the first available queue manager in the list. By making all the CLNTWGHT values greater than 0, you get a weighted selection and can bias which queue manager gets chosen.
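
In a JSON CCDT the weighting is the clientWeight attribute in each channel’s connectionManagement section; setting affinity to "none" gives a weighted selection rather than a preferred order. The values below are illustrative:

"connectionManagement":
{
    "clientWeight": 50,
    "affinity": "none",
    "sharingConversations": 10
}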

Thanks to Morag for her help in developing this article.