Are your client connections not configured for optimum high availability?

I would expect the answer for most people is – no, they are not configured for optimum high availability.

In researching my previous blog post on which queue manager to connect to, I found that the default options for CLNTWGHT and AFFINITY may not be the best. They were set up to provide consistency with a previous release. The documentation is missing the words "once you have migrated, then consider changing these options". As the options are hard to understand, I expect most people have not changed them.

The defaults are CLNTWGHT(0) and AFFINITY(PREFERRED).
I did some testing and found that some settings were good, predictable, and gave me high availability; other settings did not.

My recommendations for high availability and consistency are the complete opposite of the defaults:

  • use AFFINITY(NONE)
  • use CLNTWGHT values > 0, with values which give you the appropriate load balancing

There are several combinations of settings:

  • all clients use AFFINITY(NONE) – this was reliable
    • with CLNTWGHT > 0 for all channels, this was reliable and gave good load balancing
    • with CLNTWGHT(0) for some channels, this was reliable but did not give good load balancing
  • all clients use AFFINITY(PREFERRED) – this was consistent, but did not behave as I understood the documentation
  • a mixture of clients with AFFINITY(PREFERRED) and AFFINITY(NONE) – this gave me weird, inconsistent behavior.

So, as I said above, my recommendations for high availability are:

  • use AFFINITY(NONE)
  • use CLNTWGHT values > 0, with values which give you the appropriate load balancing.

My setup

I had three queue managers set up on my machine: QMA, QMB and QMC.
I used channels
QMACLIENT for queue manager QMA,
QMBCLIENT for queue manager QMB,
QMCCLIENT for queue manager QMC.

The channels all had QMNAME(GROUPX)

A CCDT was used
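The CCDT was built from CLNTCONN channel definitions. This is a sketch of what they might have looked like; the host names and ports are my assumptions, and the AFFINITY/CLNTWGHT values shown are the ones used in the first test below, not the defaults:

```
* Sketch of the CLNTCONN definitions behind the CCDT.
* CONNAME hosts/ports are assumptions.
DEFINE CHANNEL(QMACLIENT) CHLTYPE(CLNTCONN) CONNAME('localhost(1414)') +
       QMNAME(GROUPX) AFFINITY(NONE) CLNTWGHT(50) REPLACE
DEFINE CHANNEL(QMBCLIENT) CHLTYPE(CLNTCONN) CONNAME('localhost(1415)') +
       QMNAME(GROUPX) AFFINITY(NONE) CLNTWGHT(20) REPLACE
DEFINE CHANNEL(QMCCLIENT) CHLTYPE(CLNTCONN) CONNAME('localhost(1416)') +
       QMNAME(GROUPX) AFFINITY(NONE) CLNTWGHT(30) REPLACE
```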

A batch C program repeatedly does MQCONN to queue manager name *GROUPX, MQINQ for the queue manager name, then MQDISC.
After 100 iterations it prints out how many times each queue manager was used.

AFFINITY(NONE) and CLNTWGHT > 0 for all channels

  • QMACLIENT CLNTWGHT(50), chosen 50% on average
  • QMBCLIENT CLNTWGHT(20), chosen 20% on average
  • QMCCLIENT CLNTWGHT(30), chosen 30% on average.

On average, the proportion of times a queue manager was used was channel_weight/sum(weights). For QMACLIENT this was 50/(50+20+30) = 50/100 = 50%, which matches the observed 50% above.
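This arithmetic can be checked with a small simulation. This is not MQ code, just my model of the selection: each connection picks a channel at random with probability CLNTWGHT divided by the sum of all the CLNTWGHTs (`expected_share` and `simulate` are my own names):

```python
import random
from collections import Counter

# My model of AFFINITY(NONE) with CLNTWGHT > 0: each MQCONN picks a
# channel at random, with probability CLNTWGHT / sum of all CLNTWGHTs.
weights = {"QMACLIENT": 50, "QMBCLIENT": 20, "QMCCLIENT": 30}

def expected_share(channel, weights):
    """Expected fraction of connections made over 'channel'."""
    return weights[channel] / sum(weights.values())

def simulate(weights, iterations, rng):
    """Count how often each channel is picked over 'iterations' MQCONNs."""
    names = list(weights)
    picks = rng.choices(names, weights=[weights[n] for n in names],
                        k=iterations)
    return Counter(picks)

print(expected_share("QMACLIENT", weights))  # 0.5
print(simulate(weights, 100, random.Random(1)))
```

Removing QMCCLIENT from the dictionary gives expected_share("QMACLIENT", ...) of 50/70, about 0.71, matching the figures seen when QMC was shut down.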
I shut down queue manager QMC, and reran the test and got

  • QMACLIENT CLNTWGHT(50), chosen 71% on average
  • QMBCLIENT CLNTWGHT(20), chosen 28% on average
  • QMCCLIENT CLNTWGHT(30), not selected.

For QMACLIENT the weighting is 50/(50 + 20) = 71%, so this works as expected.

AFFINITY(NONE) for all channels, and CLNTWGHT >= 0

The documentation in the Knowledge Center says that any channels with CLNTWGHT(0) are considered first, and they are processed in alphabetical order. If none of these channels is available, then a channel is selected as in the CLNTWGHT > 0 case above.

  • QMACLIENT CLNTWGHT(50), not chosen
  • QMBCLIENT CLNTWGHT(0), chosen 100% of the time
  • QMCCLIENT CLNTWGHT(30), not chosen

This shows that the CLNTWGHT(0) was the only one selected.
When CLNTWGHT for QMACLIENT was set to 0, (so both QMACLIENT and QMBCLIENT had CLNTWGHT(0) ), all the connections went to QMA – as expected, because of the alphabetical order.

If QMA was shut down, all the connections went to QMB. Again expected behavior.
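The documented rule can be written as a sketch. Again, this is only my model, not MQ code, and `choose()` is my own function name: zero-weight channels are tried first in channel-name order, and only if none of them is available does the weighted random choice apply.

```python
import random

# Model of AFFINITY(NONE) channel selection including CLNTWGHT(0):
# zero-weight channels are tried first in channel-name order; only if
# none of them is available does the weighted random choice apply.
def choose(channels, available, rng=random):
    """channels: dict of channel name -> CLNTWGHT.
       available: set of channels whose queue manager is running."""
    for name in sorted(n for n, w in channels.items() if w == 0):
        if name in available:
            return name
    weighted = [(n, w) for n, w in channels.items()
                if w > 0 and n in available]
    if not weighted:
        return None                      # nothing to connect to
    names, wts = zip(*weighted)
    return rng.choices(names, weights=wts, k=1)[0]

chans = {"QMACLIENT": 0, "QMBCLIENT": 0, "QMCCLIENT": 30}
print(choose(chans, {"QMACLIENT", "QMBCLIENT", "QMCCLIENT"}))  # QMACLIENT
print(choose(chans, {"QMBCLIENT", "QMCCLIENT"}))               # QMBCLIENT
```

This reproduces the behavior above: with two zero-weight channels, everything goes to the alphabetically first one, and when its queue manager is down, everything goes to the next one.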

With QMACLIENT CLNTWGHT(0), QMBCLIENT CLNTWGHT(20) and QMCCLIENT CLNTWGHT(30), and QMA shut down, the connections were split between QMB and QMC in the ratio 20:30, as expected.

Summary: If you want all connections (from all machines) to go to the same queue manager, then you can do this by setting CLNTWGHT to 0.

I do not think this is a good idea; I suggest setting all CLNTWGHT values > 0 to give workload balancing.


The documentation for AFFINITY(PREFERRED) is not clear.
For AFFINITY(NONE), it takes the list of channels with CLNTWGHT(0), sorts the list by channel name, and then goes through this list until it can successfully connect. If this fails, it picks a channel at random, biased by the CLNTWGHT values.

My interpretation of how PREFERRED works is:

  • it builds a list of the CLNTWGHT(0) channels, sorted alphabetically,
  • then it creates another list of the remaining channels, selected at random with a bias towards higher CLNTWGHT, and keeps that list for the duration of the program (or until the CCDT is changed).
  • Any threads within the process will use the same list.
  • An application doing MQCONN, MQDISC and MQCONN again will use the same list.
  • With the client channels defined above, different machines or different application instances may each get a different ordering of the channels with CLNTWGHT > 0.

For example, on different machines, or in different application instances, the list may be any ordering of QMACLIENT, QMBCLIENT and QMCCLIENT, chosen with a bias towards the higher CLNTWGHT values. (I'll ignore the CLNTWGHT(0) channels, as these would always be at the front of the list in alphabetical order.)
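My interpretation above can be written as a sketch. `build_preferred_list()` is my own name, and this is only my model of what I think happens, not IBM's code:

```python
import random

# Model of AFFINITY(PREFERRED): each client process builds ONE ordered
# list of channels and reuses it for every MQCONN.  CLNTWGHT(0) channels
# go first, alphabetically; the rest are drawn one at a time at random,
# biased towards higher CLNTWGHT.
def build_preferred_list(channels, rng):
    """channels: dict of channel name -> CLNTWGHT."""
    ordered = sorted(n for n, w in channels.items() if w == 0)
    rest = {n: w for n, w in channels.items() if w > 0}
    while rest:
        names, wts = zip(*rest.items())
        pick = rng.choices(names, weights=wts, k=1)[0]
        ordered.append(pick)
        del rest[pick]
    return ordered

chans = {"QMACLIENT": 50, "QMBCLIENT": 20, "QMCCLIENT": 30}
print(build_preferred_list(chans, random.Random(0)))
```

Under this model, one process always connects to the first available channel in its list, but different processes should get different lists with a frequency driven by the CLNTWGHT values, which is what I set out to test next.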



According to the documentation, if I run my program I would expect 100% of the connections to one queue manager. This is what happened.

If I ran the job many times, I would expect the queue managers to be selected according to the CLNTWGHT.

I ran my program 10 times, in different terminal windows, and each time QMC got 100% of the connections. This was not what I expected!

I changed the QMBCLIENT CLNTWGHT from 20 to 10 and reran my program, and now all of my connections went to QMA!

With QMBCLIENT CLNTWGHT(18) all the connections went to QMA; with QMBCLIENT CLNTWGHT(19) all the connections went to QMC.

This was totally unexpected behavior and not consistent with the documentation.

I would not use AFFINITY(PREFERRED) because it is unreliable and unpredictable. If you want to connect to the same queue manager specify the channel name in the MQCD and use mqcno.Options = MQCNO_USE_CD_SELECTION.

A mixture of AFFINITY(PREFERRED) and AFFINITY(NONE)

In one configuration, all of the connections went to QMA.

In another configuration, there was a spread of connections, as if the PREFERRED was ignored.

When I tried to validate these results, I got different results. (It may be something to do with the first or last object altered or defined.)

Summary: with a mix of AFFINITY(NONE) and AFFINITY(PREFERRED) it is hard to predict what will happen, so this situation should be avoided.

4 thoughts on "Are your client connections not configured for optimum high availability?"

  1. Thank you Colin,
    What if the MQ Client connects to a Queue Sharing Group, would it be useful to change the defaults for CLNTWGHT and AFFINITY too?


  2. Excellent question Eric.

    I don't have access to a Shared Queue environment at the moment, so I cannot test this out.
    Many customers have a workload balancer or generic port in front of their CHINITs, so your connection gets workload balanced to a (random-ish) queue manager on z/OS.
    What MQ selects from the CCDT may get overridden by z/OS, which makes it hard to predict where your connection will finally go.

    With the generic port, you effectively have/need one entry in your CCDT, so CLNTWGHT is not relevant. (Unless you are going to an additional QSG.)

    If you want to use the queue manager port, and not the generic (shared) port, you should be able to influence which queue manager you go to. For example, if you know that QMA and QMB are busy with CICS work, then by giving QMC and QMD a higher CLNTWGHT than QMA and QMB you can influence where the connection goes.

    I still think AFFINITY is not useful in this case.




    1. Hi Colin,

      Thank you for your reaction.
      We will keep on using our distributed VIPA ☺.

      With kind regards,

      Erik Dijkerman

