IBM Blog 2017 July

Should I set up page set expansion – and how should I do it?

July 28 2017

This comes under the general performance question – so like most performance questions the answer is – it depends.

Why would I want to have my page set expand?

When you first set up your system you may not have known the size needed to handle peak workload – and the workload may have changed since the page set was created. You can have the page set expand as it gets full. You may also be told that, if there is a problem, messages now need to be kept for twice as long – for example 4 hours.

Why would I not want my page set to expand?

Once the page set has expanded it is not easy to make it smaller. It can take longer to back up a large page set compared to a small page set – but it depends on how you back it up!

How do I set the page set expansion options?

Use the ALTER PSID command

  • EXPAND(NONE) – No further page set expansion is to take place.
  • EXPAND(USER) – The secondary extent size that was specified when the page set was defined is used. If no secondary extent size was specified, or if it was specified as zero, then no dynamic page set expansion can take place.
  • EXPAND(SYSTEM) – A secondary extent size that is approximately 10 percent of the current size of the page set is used. It might be rounded up depending on the characteristics of the DASD. Even if no secondary space was specified for the page set, this option allows it to expand.

If you realize that EXPAND(USER) is not going to be enough space you can dynamically change it to EXPAND(SYSTEM).

What does this mean?

Suppose you allocate a primary extent of 1000 cylinders and a secondary extent of 1 cylinder. When the page set starts to get full, an internal task will start page set expansion:

  • EXPAND(SYSTEM) will allocate a secondary extent of 100 cylinders and use it. If this fills up then it will allocate another extent of 110 cylinders (10 percent of the new 1100 cylinder size)
  • EXPAND(USER) will allocate a secondary extent of 1 cylinder. If this fills up then another 1 cylinder extent will be created
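The expansion arithmetic can be sketched in a few lines of Python. This is an illustration of the rule described above, not an MQ interface; rounding to DASD track/cylinder geometry is ignored.

```python
# Sketch (not an MQ API): how EXPAND(SYSTEM) and EXPAND(USER) grow a
# page set, based on the ~10% rule described above. Rounding to DASD
# geometry is ignored.

def expansion_sizes(primary_cyls, secondary_cyls, mode, expansions):
    """Return the list of extent sizes (in cylinders) after `expansions` grows."""
    size = primary_cyls
    extents = []
    for _ in range(expansions):
        if mode == "SYSTEM":
            extent = size // 10          # ~10% of the current page set size
        elif mode == "USER":
            extent = secondary_cyls      # the fixed secondary extent size
        else:                            # EXPAND(NONE): no expansion
            break
        extents.append(extent)
        size += extent
    return extents

print(expansion_sizes(1000, 1, "SYSTEM", 2))  # the first two SYSTEM extents
print(expansion_sizes(1000, 1, "USER", 2))    # the first two USER extents
```

This makes it obvious why EXPAND(USER) with a tiny secondary extent crawls along in 1 cylinder steps while EXPAND(SYSTEM) keeps pace with the page set's actual size.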

This looks pretty simple – is that all ?

No. See here

The number of extents used to be 123. It now depends on how many volumes are available.

It used to be that if you only specified one volume in your definitions – then the page set could only use this single volume.

If multiple volumes are specified as part of the page set definition, you can use guaranteed space to make sure there is enough space available on secondary volumes. SMS will allocate space equal to the primary extent on the secondary volumes. Page set expansion can expand into this space, using only the part of the extent it needs. So MQ may expand 10 times, but from an SMS perspective only two physical extents are used.

It now looks more complex – is this it?

No. If you are using SMS, then the definitions used to create your page set may be overridden by SMS. You need to use the LISTCAT ENT(….) ALL command to see all of the information.

Even though you asked for a primary extent of 1000 cylinders, you may find your allocation is different. For example, if SMS could not allocate 1000 cylinders in one block, it allocates two extents – one of 600 cylinders and one of 400 cylinders – so you need to be careful interpreting the output of the LISTCAT!

Is that finally it ?

No – Good luck – that’s it.

Should I use MQ log compression or let HSM do it?

July 16 2017

I had a question from a customer who wanted to keep MQ archive logs for a year (for audit and problem determination purposes).
Should they use MQ log compression (SET LOG COMPLOG or CSQ6LOGP COMPLOG)?

The short answer is – let HSM compress it when the data set is migrated.

The log compression function was implemented when disks were much slower, and active logs were unable to keep up with the required throughput. These days DASD is much faster.

If your archive logs are migrated, then HSM can compress them better than MQ can. The migrated archive logs are smaller than the original data sets. You can use the HSM LIST DSN(…) command to show the original space and the current space used.

Should I use trigger first or trigger every for a high throughput queue?

July 14 2017


Below I describe a CICS transaction – but this applies just as well to other environments and to distributed platforms.

Trigger every, or trigger first with a transaction that processes only one message, is suitable for transaction rates below 1 message a second.

With a shared queue you will get a trigger message in each LPAR, so you should have one transaction for each queue manager.

Rather than have a triggered transaction that processes just one message, the transaction should process multiple messages.

A better CICS application does the following:

Do i = 1 to 1000
  MQGET with wait of 1 second
  if no message found then exit
  application logic and database update
  MQPUT1 of reply
End

It will be triggered if there are more messages.

The "do for 1000" means the transaction eventually stops and is restarted. This allows CICS (CPSM) to restart it on a different CICS region and so balance the load across CICS.

This application will perform better than the application which only processes one message because it avoids

  1. the cost of generating a trigger message for each application message
  2. the cost of the trigger monitor getting the message
  3. the cost of the transaction startup and shut down
  4. the cost of 999 MQOPENs and MQCLOSEs

An even better application does the following:

Do i = 1 to 1000
  MQGET with wait of 1 second
  if no message found then exit
  application logic and database update
  MQPUT1 of reply
  // start another instance if the queue depth is too high
  // and we have fewer than the specified number of instances
  MQINQ queue depth and number of input handles
  If number of input handles < X and queue depth > Y
    start another instance
End
// restart myself
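The control flow of this loop can be sketched in Python with the MQ calls stubbed out. The thresholds and function name are hypothetical, and a plain list stands in for the queue; in a real CICS/MQ program these would be MQGET, MQPUT1 and MQINQ calls.

```python
# Sketch of the batching loop above, with the MQ calls stubbed out.
# BATCH_LIMIT, MAX_INSTANCES and DEPTH_THRESHOLD are hypothetical
# stand-ins for the "1000", "X" and "Y" in the pseudocode.

BATCH_LIMIT = 1000      # process at most this many messages per transaction
MAX_INSTANCES = 5       # cap on concurrent instances ("X" above)
DEPTH_THRESHOLD = 500   # queue depth that triggers another instance ("Y" above)

def process_batch(queue, instances):
    """Process up to BATCH_LIMIT messages; return (processed, start_another)."""
    processed = 0
    start_another = False
    for _ in range(BATCH_LIMIT):
        if not queue:            # MQGET with wait found no message: exit
            break
        queue.pop(0)             # MQGET: consume one message
        processed += 1           # application logic, database update, MQPUT1 of reply
        # MQINQ: queue depth and number of input handles
        if instances < MAX_INSTANCES and len(queue) > DEPTH_THRESHOLD:
            start_another = True
    # after the loop the transaction ends; triggering restarts it ("restart myself")
    return processed, start_another

done, scale_up = process_batch(list(range(1200)), instances=1)
```

With 1200 messages waiting, one pass processes 1000, leaves 200 for the triggered restart, and asks for another instance because the depth stayed above the threshold while only one instance was running.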

Why is my xmit queue filling up?

July 14 2017

I was at a customer and they were discussing how to monitor an xmit queue and what to do if it fills up.

The first thing to do is to enable channel monitoring, for example MONCHL(MEDIUM) or MONCHL(HIGH) on the channel. This collects information about channel activity.
For the xmit queue you need MONQ(MEDIUM) or MONQ(HIGH). Then restart the channel.

I’ll discuss a point to point sender channel, then cover cluster sender channels.

If you do not know the channel name use
DIS CHL(*) where(XMITQ,EQ,myxmitq)

What is the oldest message on the queue?

For the xmit queue use DIS QSTATUS(xmitq) TYPE(QUEUE). This will report MSGAGE – the age of the oldest message on the queue, in seconds.

If this is small then your xmit queue is deep because a lot of messages were just put to it. If it is large (eg 10 seconds) messages are being delayed.

Display the MSGAGE, wait for a period, and display the MSGAGE again. If the newer MSGAGE = old MSGAGE + interval then the oldest message is the same; if not, there is a different oldest message. This lets you know whether messages on the xmit queue are being processed.

For example:

at time 00:00:00 MSGAGE is 40
at time 00:01:00 MSGAGE is 100 – this is the same message (old age 40 + 60 second interval)

at time 01:00:00 MSGAGE is 40
at time 01:01:00 MSGAGE is 70 – the oldest message is a different message, as 40 + 60 > 70
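The MSGAGE comparison is a one-line check, sketched here in Python:

```python
# Sketch: decide from two MSGAGE samples (DIS QSTATUS ... MSGAGE)
# whether the oldest message on the queue is still the same message.

def same_oldest_message(age1, age2, interval_seconds):
    """True if the second MSGAGE sample is just the first one aged by the interval."""
    return age2 == age1 + interval_seconds

# The examples from the text, sampled 60 seconds apart:
print(same_oldest_message(40, 100, 60))  # same message: 40 + 60 = 100
print(same_oldest_message(40, 70, 60))   # a different, newer oldest message
```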

Looking at the channel

For the channel with the problem issue DIS CHS(..) ALL, wait for a short period, for example 1 second, and reissue the command.
This gives output like the following (I've picked out the fields of interest):

AMQ8417: Display Channel Status details.

Key things to look at:

If no channel status information is produced, the channel has not been started.

  • Status
  1. it should be RUNNING
  2. if it is PAUSED then the put at the remote end had a problem – so investigate the far end (this applies only to receiver channels – thank you Morag)
  3. any other status means the channel has not started, so resolve the problem
  • If MSGS() is increasing then messages are being processed.
  • BATCHSZ is the maximum number of messages in a batch. Treat the default of 50 as a minimum. Larger batch sizes are usually more efficient. In some situations on z/OS we have used BATCHSZ(1000). The value depends on the reliability of the network – for unreliable networks a small value is better.
  • XBATCHSZ is the rolling average of the achieved batch size. If XBATCHSZ is close to BATCHSZ then it is likely that some of your batches were full – and there were more message on the queue waiting to be sent. Consider increasing the BATCHSZ at each end to improve the throughput. You may also need to increase BATCHLIM on the channel definition as this controls the maximum amount of data transferred in a batch.
  • XQTIME is the time, in microseconds, that messages remained on the transmission queue before being retrieved. This should be small – typically under 1-2 seconds (a second is 1000000 microseconds). Longer than this and the messages are being delayed.
  • If a channel has not been running and is restarted, and there were messages on the xmit queue, then the XQTIME will be large – and will gradually come down as messages are processed. This value is recalculated every Nth message, where N depends on the value of MONCHL.
  • NETTIME is a measure of the time taken for the end-of-batch flow to go to the remote end and for the response to come back. Note that if the remote end is slow to process messages, the data will be held in TCP buffers until the channel receives it – so a large NETTIME can be caused by a network delay OR a problem at the remote end.

You need to have a value from a good day to be able to compare with a bad day.

What is the data rate?

One good measure is the rate at which the channel processes data. Issue the DIS CHS command twice. From the bytes sent for a sender channel (or bytes received for a receiver channel) and the time between the two commands, you can calculate the data rate over the channel, for example in MB/second. If you plot this over time you get a profile, and this may indicate whether the throughput plateaus.
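The calculation can be sketched as follows. BYTSSENT is the cumulative bytes-sent counter shown by DIS CHS for a sender channel; the sample values here are hypothetical.

```python
# Sketch: estimate channel data rate from two DIS CHS samples.
# BYTSSENT (sender) or BYTSRCVD (receiver) is a cumulative counter,
# so the rate is the difference divided by the time between samples.

def channel_rate_mb_per_sec(bytes_first, bytes_second, seconds_between):
    """Data rate in MB/second between two DIS CHS samples."""
    if seconds_between <= 0:
        raise ValueError("samples must be taken at different times")
    return (bytes_second - bytes_first) / seconds_between / 1_000_000

# e.g. two hypothetical samples taken 10 seconds apart
rate = channel_rate_mb_per_sec(500_000_000, 750_000_000, 10)
```

Feeding a series of such samples into a plot over the day gives the throughput profile mentioned above.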

With the system cluster transmit queue (SCTQ), most of the above applies, but it is complicated by the fact that there can be many channels processing messages from the queue.
The information for channels still works; the information about the queue is harder to use.

If one channel has stopped, messages may accumulate for that channel, so the depth of the SCTQ will increase.
The age of the oldest message will be that of a message waiting for the stopped channel. If you monitor the MSGAGE for the SCTQ:

  • If it is small (under 2 seconds) there is no problem.
  • If it is large then this indicates that a channel may not be processing messages, but you cannot tell which channel.

It is good to have standards for your object names. It is easier to issue DIS CHS(CL*) for all cluster channels than to issue DIS CHS(PAYROLL_CL*) and DIS CHS(INQUIRY_*) etc.

What do you automate?

If your automation produces a queue high event for a transmit queue:

Issue DISPLAY QSTATUS(xmitq) TYPE(QUEUE) and check the MSGAGE
If MSGAGE is small then
  report "high queue depth – but low message age"
  wait 2 seconds and repeat DISPLAY QSTATUS(xmitq) TYPE(QUEUE)
Issue the DIS CHS(…) command
Check the number of responses you get back is the number of channels you should have running (one for point to point, N for cluster channels)
Check the channels are all RUNNING
If any channel is not in running status, report "channel not in running status"
Report for each channel: channel name – NETTIME XQTIME BATCHSZ XBATCHSZ
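The decision logic above can be sketched in Python. The threshold, function name and input shapes are hypothetical stand-ins for whatever your monitoring tool supplies; the MQSC commands themselves would be issued by your automation product.

```python
# Sketch of the queue-high automation logic above. The MSGAGE value and
# channel statuses would come from DISPLAY QSTATUS and DIS CHS; here they
# are passed in directly.

MSGAGE_SMALL = 2  # seconds; below this the depth is just a burst of new puts

def on_queue_high_event(msgage, channel_statuses, expected_channels):
    """Return the report lines the automation should produce."""
    report = []
    if msgage < MSGAGE_SMALL:
        report.append("high queue depth - but low message age")
    if len(channel_statuses) < expected_channels:
        report.append("fewer channels running than expected")
    for name, status in channel_statuses.items():
        if status != "RUNNING":
            report.append(f"channel {name} not in running status")
    return report

# a stopped cluster channel alongside a healthy one, with 3 channels expected
msgs = on_queue_high_event(
    msgage=1,
    channel_statuses={"TO.QM2": "RUNNING", "TO.QM3": "RETRYING"},
    expected_channels=3,
)
```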

How do I turn on Chinit SMF every time?

July 11 2017
This was a recent question… and the answer is… use the Chinit //CSQINPX data set. This is similar to the queue manager //CSQINP2.

You can put other commands like start listener in this data set.

Or – thanks to the hint from Norbert Pfister



before you start the CHINIT.

Tuning MQ for QREP on z/OS

July 10 2017

Q Replication is a component of IBM Data Replication (IIDR). QREP is a product with two parts.

  1. The capture part reads DB2 logs on one system and puts the data into MQ messages, which are sent to a remote site.
  2. The QREP apply part reads the MQ messages and applies the changes to a database at the remote site.

QREP is used by GDPS AA to replicate DB2 data from a z/OS primary site to a secondary site. It is also used to replicate data from z/OS to a distributed database.

The scenario is high throughput and time sensitive; there is some tuning you can do in this environment.

  1. Do not mix QREP and non-QREP workloads on the same queue manager
  2. Where there is significant distance between the capture and apply parts, have a separate queue manager for each and link the two with MQ sender and receiver channels
  3. You get maximum throughput when you use multiple transmission queues and multiple channels. This is known as the parallel send queues feature in QREP. Four channels is a good number for many people – in our test systems, the maximum throughput we saw used 8 channels. The optimal number of channels may be a function of distance: one customer had 4 channels over a distance of 1000 kilometers; another customer used 2 channels at 100 kilometers
  4. On the capture side have one transmission queue per page set
  5. At the apply side there is only one queue – this needs its own page set. It must have an index on MSGID defined.
  6. Some people allocate a big page set, or cause expansion to the size they need to hold the expected peak number of messages, before they run QREP in production.
  7. Use MQ V8 with large buffer pools defined above the 64 bit bar, LOCATION(ABOVE). For maximum performance use page-fixed buffers, PAGECLAS(FIXED4KB)
  8. Make the buffer pool size the size of the page set + 20%. If the buffer pool gets over 85% full it starts writing pages to the page set.
  9. Check the DASD response times for the logs and page sets during peak time
  10. Start by configuring MQ channels for QREP with BATCHSZ(200) and BATCHLIM(100000) – BATCHLIM is in KB, so this is approximately 100MB per batch.
  11. Make sure your network is using large TCP send and receive buffers (2MB). See what's new in TCP 2.2
  12. Set TCP buffer sizes to be used by MQ. See CHINTCPRBDYNSZ and CHINTCPSBDYNSZ in Getting the best throughput with MQ TCPIP channels
  13. With MQ V8 and above, setting the QREP tuning parameter PRUNE_BATCH_SIZE to 1000 messages on the apply side generally works well.
  14. Depending on the workload profile, using a QREP MAX_MESSAGE_SIZE of 1MB may give improvements in throughput.
  15. There are some tuning parameters specifically for MQ use by QREP: READAHEAD(ON) and RAHGET(ON) on both the source and target queue managers – the default is OFF for both parameters. See the QREP documentation
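The buffer pool sizing rule in point 8 is simple arithmetic, sketched here (sizes are in 4KB buffers; the page set size is a hypothetical example):

```python
# Sketch of the sizing rule above: buffer pool = page set size + 20%,
# and deferred writes to the page set start when the pool is over 85% full.

def buffer_pool_size(page_set_buffers):
    """Recommended buffer pool size: page set size plus 20%."""
    return int(page_set_buffers * 1.2)

def deferred_write_point(pool_buffers):
    """Buffers in use at which pages start being written to the page set (85%)."""
    return int(pool_buffers * 0.85)

pool = buffer_pool_size(1_000_000)       # for a hypothetical 1M-buffer page set
write_point = deferred_write_point(pool)
```

Sizing the pool larger than the page set means that, in the steady state, messages can be put and got without touching the page set at all.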

These blog posts are from when I worked at IBM and are copyright © IBM 2017.