IBM Blog 2018 March

What’s the cost of using an MQ message selector to select a message from a queue?

Feb 28 2018

Using a message selector is a bit like getting a message from the queue by msgid or correlid when the queue is not indexed.
If there are 500 messages on the queue, and you want the 500th message, the queue manager scans the messages on the queue sequentially to see if each one matches the msgid or correlid. It takes CPU to get each message and check it. Using the INDXTYPE attribute on the queue allows MQ to scan the index and directly access the required message.

The message selector logic is more complex than this.
Instead of simply checking one field for msgid or correlid, during each MQGET the queue manager has to check the message payload for the specified fields.
Each property in the RFH2 header has to be checked.
If you use multiple selectors, even more work has to be done: for example “colour=blue or colour=red” means there are two fields to check.
This means that MQGETs with message selectors will be more expensive than a simple MQGET-next (or MQGET-specific on an indexed queue).

The selectors are specified at open time, so the open costs will increase.

What are the costs?

As an example,
1500 messages were put on a queue: 500 with colour=blue, 500 with colour=red, and 500 with colour=green. The putting jobs ran concurrently so the messages were interleaved on the queue.

We ran a getting job which selected colour=green, and got the 500 messages.

  1. The CPU for the open went from 40 to 130 microseconds.
  2. The average CPU time for the get was 350 microseconds compared to 60 for a simple get first.
  3. The SMF data showed that 188580 messages were skipped.

The first MQGET may have skipped 2 messages, the last MQGET may have skipped 499 messages – depending on the layout of the messages on the queue.

What can I do to reduce the costs?

  1. Avoid deep queues – for example use a separate queue for PAYROLL_REPLY and one for PAYMENTS_REPLY, and avoid a common reply-to queue
  2. Only have as many message properties as you use
  3. Specify the minimum number of selection properties.

Thanks to Tony Sharkey for his help (and numbers).

Saving CPU by having a job that does nothing!

Feb 27 2018 ‎

With shared queue, there is a first-open and last-closed effect which can increase the cost of opening a queue.

When an application opens a shared queue, if no other application on that queue manager has the queue open, then the queue manager has to go to the coupling facility to register an interest in the queue, and to get information about the queue. This is the first-open effect.
If the queue manager already has the queue open – it does not need to register, it already has the information, and so saves CPU.

When an application closes a shared queue, if no other application on that queue manager has the queue open, then the queue manager has to go to the coupling facility to deregister an interest in the queue. This is the last-close effect.

Some applications naturally have the queue open for a long time, for example a channel, a trigger monitor, and long running applications. For high throughput, short lived transactions, the queue may always have at least one application with the queue open. In these cases the first-open, and last-close effects are not seen.

If you have short running transactions which are not at a high enough throughput to have the queue open all of the time, there may be little or no overlap, and so each transaction can experience the first-open and last-close effects.

How can I tell?

In MP1B, if you specify detail(15) or a higher value, the task statistics include information like
Open CF access                     1   
Open No CF access                  0   
and
Close CF access                    1   
Close No CF access                 0

The “Open CF access” and “Close CF access” counts tell you that the open and close experienced the first-open and last-close effects.

For those who want to know the internal details…

  • The SMF field opencf0 is the number of opens that did not have the first-open effect.
  • The number of opens that had the first-open effect is openn – opencf0.
  • Similarly with close: “Close No CF access” is field closecf0, and closes with CF access are closen – closecf0.

Latches used

Within the queue manager, latches are used to serialize requests. A latch is held across the first-open and last-close operations. If another application wants to open the queue, then it will have to wait for the latch to be released. The latch used is latch 11, DMCSEGAL. As the latch is held across CF requests, a long CF response time, for example an Async request, will lead to long latch times.

What can I do about it?

If you see a large number of first-opens and last-closes, then having a batch job sitting there with the queue open will eliminate the first-open and last-close effects.
This should greatly reduce the DMCSEGAL latch times. You should also review the CF response times, and check the performance of the structure and the CF to see if you can reduce the CF response times.

What is the impact on multiple queue managers?

The coupling facility is a classic server when discussing queuing. As the number of requests increase, the response time also increases – it might increase by a small amount – such as 10% – or on a very busy server it can double the response time.
Having multiple queue managers, with the first-open, last-closed effect will increase the response time because of the increased number of CF requests.
If all queue managers use a batch job which does nothing but keep the queue open, this should eliminate many CF requests, so all applications using the structure should benefit – and it will reduce the CPU used!

MP16 has been updated to include CPU figures – search for “first-open” in MP16.

As the title of the blog post says – having a job which does nothing can save you CPU!

My favourite USS shell commands

Feb 26 2018

With MQ components Managed File Transfer (MFT), AMS, MQWEB and the REST API, more z/OS functions use Unix System Services. Here are some of the useful USS concepts and commands I frequently use.

I use ls -ltr and cd commands the most.

Thanks to Mark Hiscock for sharing his list of favourite commands.

This post has links to the z/OS USS Knowledge Center documentation for the commands.

What is my home directory?

When I logon my RACF profile OMVS Segment says HOME= /u/paice .
When I get into USS my default directory is /u/paice

What is the syntax of…? man — Display sections of the online reference manual

Use the man command to display the syntax of commands.

man ls

Moving about the directory tree

You use the cd to change directory. cd — Change the working directory

. current directory
cd ~ go to your home directory (where you get when you first logon – mine is /u/paice). So if I am in /mqm/V900, cd ~ gets me to /u/paice
cd .. go up one level. If you are in /u/paice/programs/source, cd .. gets you to /u/paice/programs
cd ../.. go up two levels. If you are in /u/paice/programs/source, cd ../.. gets you to /u/paice
cd prog*/so* this assumes that, from where I am currently, there is a unique name beginning with prog, and then within that directory a unique directory name beginning with so
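As a quick sanity check, the relative moves above can be tried in a throw-away directory tree (the /tmp/cddemo path below is just an illustrative scratch location):

```shell
# build a small scratch tree to practise in (path is hypothetical)
mkdir -p /tmp/cddemo/programs/source
cd /tmp/cddemo/programs/source
cd ..        # up one level
pwd          # now in /tmp/cddemo/programs
cd /tmp/cddemo/programs/source
cd ../..     # up two levels
pwd          # now in /tmp/cddemo
```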

Which directory am I currently in? pwd — Return the working directory name

pwd

Listing files and directories – use the ls command ls — List file and directory names and attributes

ls lists the names of the files in the current directory
ls ~ list the names of the files in my home directory
ls /tmp list the names of the files in the specified directory
ls -ltr list the names, permissions, etc of the files in the current directory, so the latest changed files are at the bottom
-l Displays permissions, links, owner, group, size, time, name. See Long output format for details.
-t Sorts by time.
-r Reverses the sort.
ls -a includes “hidden” files beginning with a . – such as .profile and .sh_history

Copy files cp — Copy a file
cp here there
cp -R here there “clones” the source tree. cp copies all the files and subdirectories specified by here into there.

Move files mv — Rename or move a file or directory
mv from to rename or move files or directories

Remove (delete) rm — Remove a directory entry

rm file remove a file
rm -R name recursively removes the entire directory structure if name is a directory.

Make directory mkdir — Make a directory

mkdir name make a directory

Remove directory rmdir — Remove a directory
rmdir name remove directory

Change ownership information chown — Change the owner or group of a file or directory
chown owner[:group] file name change the owner and group of the file
chown -R owner[:group] pathname changes the owner of a directory and all files within the directory
Change permissions chmod — Change the mode of a file or directory
chmod 740 fred makes the file read/write/execute for the owner (7), read-only for the group (4), and no access for others (0)
chmod o+w fred gives “others” write access to the file
chmod -R g-w freddir removes write access for the group from the directory and all directories below it

Search for interesting file names find — Find a file meeting specified criteria

find . -name "*.txt" finds all files ending in .txt in the directory tree from where I am. This is a very powerful command; you can search for files which are APF authorised, or older/newer than a certain number of days

find . -name '*.rexx' -exec ls -ltr {} \;

finds all files matching *.rexx and executes the ls -ltr command on each file, so it produces the files with their date, owner info etc. For example

-rwxrwxrwx. 1 paice paice 350 Dec 29  2015 ./ftp.rexx
-rwxrwxrwx. 1 paice paice 541 Dec 29  2015 ./deb.rexx

What environment variables are set? set — Set or unset command options and positional parameters

To set an environment variable you can use export — Set a variable for export

You have a .profile which you can use to preset variables.

For example I have

export JAVA_HOME=/java/J64/
export WLP_USER_DIR=/u/paice/MVCA/mqweb

to use the MQWEB stuff.

To find what JAVA_HOME is use

set |grep JAVA_HOME

and it gave me

JAVA_HOME="/java/J64/"                                               
OLDJAVA_HOME="/java/java80_bit64_sr4/J8.0_64"

Display space commands df — Display the amount of free space in the file system and du — Summarize usage of file space
df -P . displays the free space in the file system containing the current directory
du . displays the space used by all the files in the current directory

Set extended attributes extattr — Set, reset, and display extended attributes for files

Display attributes using

extattr /usr/sbin/proga
ls -E /usr/sbin/proga

Set using

extattr +p /usr/sbin/proga

You can set the following attributes

a APF

l shared

p define as Program Controlled. For example message BPXP015I HFS PROGRAM .. IS NOT MARKED PROGRAM CONTROLLED

s to do with sharing

If you get problems after you have unpacked a file, check you used the correct options. For example

tar -xpoXf ….

Search for data within files: grep — Search a file for a specified pattern
grep zzzz *.txt search for zzzz in all files like *.txt
grep -i zzz *.txt ignore the case of zzz

Pack and unpack tar — Manipulate the tar archive files to copy or back up a file

tar -xf ~/CSQ8JTR1.tar

expands (-x) the file (-f) ~/CSQ8JTR1.tar (in my current home directory ~) into the current working directory

Display file contents
cat file displays the file on the screen (with optional code page conversion) cat — Concatenate or display text files
more name displays the file a page at a time; q (or :q) to quit more — Display files on a page-by-page basis
head -n 10 file displays the first 10 lines of a file head — Display the first part of a file
tail -n -10 file displays the last 10 lines of a file tail — Display the last part of a file
tail -n +10 file displays the file from line 10 to the end tail — Display the last part of a file
oedit ISPF edit the file (for green screens) oedit — Edit files in a z/OS UNIX file system

obrowse ISPF browse a file obrowse — Browse a z/OS UNIX file
vi the well-known editor – works in green screen and line mode
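A handy combination of head and tail is extracting a range of lines from the middle of a file – for example, lines 5 to 10 (the /tmp/lines.txt file below is just sample data):

```shell
# make a 20-line sample file (hypothetical path)
seq 1 20 > /tmp/lines.txt
# lines 5 to 10: take the first 10 lines, then the last 6 of those
head -n 10 /tmp/lines.txt | tail -n 6
# prints 5 6 7 8 9 10, one per line
```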

Sort sort — Start the sort-merge utility
sort -k… file you can sort by fields or columns, e.g. sort -k 2,4 sorts on fields 2 through 4

sort -k1.22,1.27 file sorts on columns 22 to 27 of the file. This means start at column 22 of the first field (-k1.22) and end at column 27 of the first field (1.27)

sort -u -k…. keep only one record for each value of the sort field

sort -r … reverse the sort order

grep hd iostat.Dl |sort -r -n -k1.97,1.103 > sorted

from the file iostat.Dl (AIX iostat times)

extract records with hd in them

sort in reverse order

on columns 97 to 103

-n means treat the field as numbers, so '10' is > ' 1'

put the output into the file sorted

taking this further

head -n 11 iostat.Dl|tail -2 > sorted

takes the top 11 lines … then the last 2 of these, and puts them into the file sorted. This gives me the column headings.

then

grep hd iostat.Dl |sort -r -n -k1.97,1.103 >> sorted

the >> appends to the end of the file

so I get the column headers, and the sorted data!

I just wanted to extract some fields of data and sort them

grep hd iostat.Dl |awk 'BEGIN {FS=" "; OFS=","} {print $1, $14, $15}'|  sort -r -t, -n  -k2,2 |less

Where

awk 'BEGIN {FS=" "; OFS=","} {print $1, $14, $15}'

says the input fields are separated by FS

print .. prints the columns 1,14,15

separate the output fields by OFS ( ,)

sort.. -t, says the fields are separated by ‘,’

-n is sort numeric, so '10' is larger than ' 1'

-k2,2 is just sort on the 2nd field
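The same pipeline can be tried on a few lines of made-up iostat-style data (the /tmp/iostat.sample file and its values are invented for illustration; real iostat output has different column contents):

```shell
# synthetic data: disk name in field 1, numbers in fields 14 and 15,
# matching the fields picked out by the awk pipeline above
cat > /tmp/iostat.sample <<'EOF'
hd1 x x x x x x x x x x x x 30 300
hd2 x x x x x x x x x x x x 5 500
hd3 x x x x x x x x x x x x 120 100
EOF

# pick records with hd, keep fields 1, 14, 15 comma-separated,
# then sort numerically descending on the second field
grep hd /tmp/iostat.sample |
  awk 'BEGIN {FS=" "; OFS=","} {print $1, $14, $15}' |
  sort -r -t, -n -k2,2
```

The busiest disk (largest field 14) comes out first.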

Unique uniq — Report or filter out repeated lines in a file

Combine this with sort so you can merge two files and display lines that are the same, or different

uniq -u … display lines that are not repeated

uniq -c … put a count on the front of each line
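Because uniq only collapses adjacent duplicate lines, it is usually combined with sort – for example, to count how often each value appears (the sample file below is illustrative):

```shell
# sample data (hypothetical path)
printf 'red\nblue\nred\ngreen\nred\n' > /tmp/colours.txt
# uniq only spots adjacent duplicates, so sort first
sort /tmp/colours.txt | uniq -c
# gives a count in front of blue, green and red
```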

Put the output into a file instead of the screen > Redirecting command output to a file, Redirecting error output to a file

ls * > a puts the output from the ls * command into the file called a

ls * 1>a 2>b some commands write to stdout and stderr … 1> is for stdout, 2> for stderr
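A minimal sketch of splitting the two streams (the echo commands just stand in for any command that writes to both stdout and stderr; the /tmp log paths are illustrative):

```shell
# generate one line on stdout and one on stderr, then split the streams
{ echo 'normal output'; echo 'an error message' >&2; } 1>/tmp/out.log 2>/tmp/err.log

cat /tmp/out.log    # prints: normal output
cat /tmp/err.log    # prints: an error message
```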

| the pipe. This can pass the output from one command as input to another Using a pipe

So now you can do things like

ls -lt * |head display the files sorted in date order, then take the top 10
ls -lt * |head > a takes the most recent 10 files and puts the names in file a

bpxmtext nnnnnnnn What does that LE reason code mean? bpxmtext — Display reason code text

From message BPXF105E RETURN CODE 0000008D, REASON CODE 05620060. AN ERROR OCCURRED DURING THE OPENING OF HFS FILE

bpxmtext 05620060

Become super user – that is – all powerful su — Change the user ID associated with a session

su

allows you to access all files etc – and, if you are not careful, you can delete all files!

exit gets you back to your own, safer, user ID.

Frequent tasks

  • Find large files : du -a . | sort -n -r | head -n 30
  • What address space has what port? : /bin/netstat -a | grep 12345
  • Convert this file to EBCDIC : iconv -f UTF-8 -t IBM-1047 File.txt > File.txt.ebcdic
  • Tag a file as ASCII for UNIX tools to access it
    • export _BPXK_AUTOCVT=ON which turns on the support then:
    • chtag -t -c iso8859-1 <filename> will tag <filename> as ASCII, so tools like vi can access it
  • ls -lT <filename> will tell you whether the file is tagged or not.

What do the MQ DB2 plans do?

Feb 26 2018

While looking into a performance problem, I could see from the DB2 performance data that an MQ plan was heavily used, so I thought it was worth documenting what the plans are used for.

  • CSQ5A900 Auth info
  • CSQ5B900 CSQ5PQSG batch utility for adding and removing queue manager and queue sharing group definitions
  • CSQ5C900 Shared channel
  • CSQ5D900 Delete object
  • CSQ5K900 Shared sync key
  • CSQ5L900 list object
  • CSQ5M900 Large message object manipulation (BLOB entries)
  • CSQ5R900 Read object
  • CSQ5S900 Shared channel status
  • CSQ5T900 Structure manipulation
  • CSQ5U900 update object
  • CSQ5W900 insert object
  • CSQ52900 used by the CSQUTIL program to access DB2. The program is called when the SDEF function is requested for QSGDISP(GROUP) or QSGDISP(SHARED) objects.
  • CSQ5Z900 Service utility — DB2 function

What’s the CPU cost of not defining a queue?

Feb 24 2018 ‎

The obvious answer – almost nothing – is half right!

If you are not using shared queue then the queue manager quickly does a check and returns MQRC_UNKNOWN_OBJECT_NAME (2085).

If you are using shared queue, then the queue manager has to go to the DB2 tables to check. In round numbers this doubles the cost of a normal MQOPEN.

How can you tell?

You can use MQ events to produce an event when a non-zero return code is produced. See below.

I spotted this from the MQ accounting information. It had

== DB2 activity :          1 requests                                    
> Average time per DB2 request-Server :      784 uS                      
> Average time per DB2 request-Thread :      784 uS                      
> Maximum time per DB2 request-Server :      784 uS                      
> Maximum time per DB2 request-Thread :      830 uS                      
> Bytes put to DB2    :        0                                         
> Bytes read from DB2 :        0

and there was no queue record in the SMF data.

You might also see high activity in the DB2 plan CSQ5R*

In the DB2 accounting data, I had entries with CORRNAME: MQPADB2S where MQPA is the queue manager name, and DB2S is for a Server task.

What is the cost?

Tony Sharkey did some measurements. A program just did 1 million MQOPEN+MQCLOSE pairs.

In round numbers… average cost per MQOPEN+(MQCLOSE) in microseconds

                     Queue existed   Queue did not exist
QM cost                   20                 10
Application cost          40                 60
DB2 cost                   0                 50
Total                     60                120

What can you do about this?

If this is a surprise to you, and you have a system where you can run just a few transactions, you can use the MQI call and user parameter trace with GTF on z/OS.
This lets you take a GTF trace of the API requests. You can use this to trace one transaction and check the return codes from the API requests.

You can use queue manager events, for example when an application issues an MQI call that fails. The reason code from the call is the same as the reason code in the event message. You can use DISPLAY QMGR LOCALEV to check the status. I have not used this.

You can talk to your applications teams and discuss this with them.

One customer had an application where, if the queue “PAYROLL.DEBUG” was defined (and available), the application copied the input and output messages to this queue to aid debugging. On a local queue manager this was a cheap check. On shared queue it would be expensive.

Where’s MP16 gone?

MP16: Capacity Planning and Tuning for IBM MQ for z/OS moved quite a few (!) months ago to GitHub, but I had failed to update my bookmark. If you are a bit like me, it is now here along with other MQ performance documents. So change your bookmark before you forget again!

I want to get the best throughput from Shared Queue

Feb 19 2018

We had a customer ask how they get the maximum throughput through MQ with Shared Queue.

As with most performance questions, the answer is it depends, and your mileage may vary (which means on different days a slight change in your environment may cause a relatively large impact to your throughput).

There is information on MQ performance, including shared queue in the MP16 SupportPac

One key concept is Synchronous/Asynchronous IO to the Coupling Facility (CF).
An IO to DASD has the following steps:

  1. Start the IO
  2. Suspend the task
  3. Wait for IO
  4. Resume the task

This is how an Async request works for the CF.
If the time to access the CF is very small, the CPU cost of the Suspend and Resume may be relatively expensive.

The Synchronous CF IO issues an instruction to the CF – and does not suspend. In concept, this is single instruction and is using CPU while it is executing. The duration of this instruction depends how long it takes to get to the CF, as well as the duration while the request is being actioned in the CF.

z/OS looks at the time of the Sync IO, and if this is more than the cost of Suspend + Resume, then it is likely to use the Async instruction.

The duration of a sync IO is of the order of 10 microseconds. The duration of an Async IO may be 100 microseconds. So Sync is better than Async.

The major impact on throughput, is the coupling facility, the time taken to get to the structure, and the time to process work in the CF.

  1. CF placement – closer is faster than remote
  2. CF CPU type (a dedicated ICF is better than CFCC thin interrupts, which is better than dedicated GPs, which is better than shared GPs)
  3. The links between the z/OS and the CF. For example real cables, or the Internal Coupling Facility within the same z processor.
  4. As the CPUs in the CF get busier, the response time increases – the classic performance problem
  5. As the IO to the CF gets busier, the response time increases – again the classic performance problem
  6. If there are multiple structures in the CF, Structure_A can be really busy while Structure_B is not. But because of the IO to Structure_A, the IO for Structure_B is impacted.
  7. There comes a point where it is more efficient to use Async rather than Sync requests, and so applications will see a jump in response times. You can use the z/OS performance reports for the CF to see performance information about CFs and Structures
  8. Duplexing tends to take longer – as there is a local CF and a remote CF, updates take longer because the operation needs to complete on both CFs before the update is complete.
  9. We have seen situations where it is more efficient to offload data to SMDS because writing the small amount of data to the structure was a Sync request – but writing the entire message to the structure used Async

A slight change to your environment, for example a CICS structure has slightly more activity, can make the MQ requests go from Sync to Async – and so take longer. This can change from moment to moment.

Other considerations

  • With more data in a message, the IO takes longer. Around 32KB there may be a switch from Sync to Async. This depends on the environment.
  • Commits have to go to every structure involved in the UOW, so it is best to have all messages in one structure
  • Have unique or few msgids/correlids. Avoid having many (>100) messages with the same msgid or correlid.
  • Persistent in syncpoint is good – but is persistent necessary?
  • Use get wait – do not poll the queue
  • Have a queue open on each LPAR to avoid the “first open on LPAR” effect. If a queue manager does not have the queue open, then an open request will go to the CF to get information about the queue, and ask to be notified about any changes to the queue. If the queue is open in a queue manager, then the queue manager already has information about the queue.
  • When a queue is closed, if it is the last instance on that queue manager, then the QMGR goes to the CF to say it is not interested in the queue. Having a batch job open the queue, then sit there doing nothing, will ensure a queue manager has the queue open.
  • We ran a test and found that running 20 queues with a queue on each structure gave slightly higher throughput than 20 queues in one structure.
  • Deep queues. Backing up the structure causes messages to be read from the structure. The more data in the structure means a bigger impact on the application response time due to increased IO time, and CPU busy while the backup is in progress.

What can you do?

  • Monitor the response times for example using RMF to display the CF responses and the utilization every minute.
    • Monitor the delayed requests in the CF report for the structures (basically monitor the structures generally as that includes the ratio / response times)
    • Review the CF as a whole, and the structures in it, and move heavily used structures to a different CF
  • Monitor the ratio of Sync to Async requests for your structure
  • Monitor the CF busy %. If it is above 65% then you may need to add more CPs.
  • Use MQ accounting class(3) to see how many structures are being used.
  • Use MQ accounting class(3) to see if messages are persistent or non persistent.

So is it all about speed ?
No – speed and throughput are just part of the overall picture. You need to think about the business requirements, the cost, and problem scenarios.
My manager has a car which can do 150 Miles Per Hour – but his aged mother with a hip problem cannot get in and out of the car – and so the car is not meeting the “business requirements”.

A customer running on distributed MQ was having throughput problems due to erratic disk response times. They made the decision to change from persistent messages to non persistent – to make MQ go faster! Many customers do use just non persistent messages, but their applications have logic to handle failures, such as a lost message – this is designed in. You need to understand the business requirements, and the application design, to ensure that the applications can meet the business needs.

If you have two sites, then the site closest to the Coupling Facility may process most of the messages – because the CF is closer. If you switch over to the other site, then it may be worth failing over to a CF on that site – because it will then be closer. Whether you have a CF at each site may come down to a business decision and the cost of providing two CFs.

You have to consider what-if scenarios. Your systems may be running fine with only a few messages on the queues. What if a channel fails to start, or an application is not processing the queue? Will the queue grow till it fills the structure, or is there a sensible MAXDEPTH? If you are using SMDS and allow 1 million messages on the queue, how will it perform until the queues are empty? Do you want to limit the queue to 1000 messages and not use SMDS?

So, as we often say, “it depends”.

My USS file is not readable

Feb 9 2018 ‎

I was playing around with some files in USS, as part of MQWEB and MQREST, and found that somehow I could not browse or edit my xml files, because they looked scrambled up.

The solution was simple – the file tag had been changed. A file tag has information about the code page of the file; see file tagging in enhanced ASCII and the chtag command in the z/OS documentation.

For example, in USS,

the command

chtag -p  *.xml

gave

m ISO8859-1 T=off saf.xml

- untagged T=off server.xml

If I use

  1. oedit saf.xml – the file is readable. If I use hex on, I can see the blanks are x’20’ and so it is in ASCII
  2. oedit server.xml – the file is unreadable

I used

chtag -t -c  ISO8859-1 server.xml      

to tell USS this was in ascii, and I could then edit it.

chtag -p              server.xml

gave me

t ISO8859-1   T=on  server.xml

Should I duplex my non recoverable structure?

Feb 6 2018

A customer asked…. if we have a non recoverable structure just for non persistent messages – should this be duplexed?

Gwydion said…

It is not necessary to duplex a non-recoverable structure. Yes, it’s a single point of failure, but in the event of a failure the qmgr can allocate a new structure in seconds, as long as there’s somewhere to allocate it. The messages will all be lost at that point, but that should be OK as they’re non-persistent.

So no need to duplex the structure, but definitely worth having more than one CF available where the structure can be allocated.

Should I use more than one structure within a unit of work?

Feb 2 2018 ‎

In a different blog post I recommended that you have a structure for non persistent messages and another structure for persistent messages. Someone contacted me about this and said it is more subtle if you use more than one structure in a unit of work.

The recommendation is to try to use just one structure for messages within a transaction.

Within this,

  1. If all messages are non persistent, then you can use a structure with no recovery. This means the structure is available quickly after a problem.
  2. Otherwise you need to use a structure with recovery. If the structure needs to be recovered, then data will be read from MQ logs and so it will take longer to recover the structure.

As a general rule, transactions should use persistent messages, or non persistent messages, but not a mixture of both.

In some cases there is a good business need for a mixture of persistent and non persistent messages in a unit of work. For example

MQGET of a persistent message
MQPUT of a persistent reply
MQPUT of a non persistent message for auditing/monitoring saying “message XYZ was processed at 12:45”
MQCOMMIT

What is the impact of putting messages to more than one structure in a unit of work?

The short answer is it uses more CPU, and takes longer. The increase depends on your configuration and number of structures used.

Messages in the same structure

Consider a transaction which does an MQPUT of two messages to different queues, where both queues are in the same structure.

  1. The first MQPUT does one IO to the application structure
  2. The second MQPUT does one IO to the application structure
  3. The commit does two IOs to the Admin structure,
  4. The commit does one IO to the application structure with the command to commit both the MQPUTs (making the messages available to other applications)

A total of 5 IO requests to the CF structures

Messages in different structures

Consider a transaction which does an MQPUT of two messages to different queues, where the queues are in different structures.

  1. The first MQPUT does one IO to the first application structure
  2. The second MQPUT does one IO to the second application structure
  3. The commit does three IOs to the Admin structure
  4. The commit does one IO to the first application structure with the command to commit the MQPUT(making the messages available to other applications)
  5. The commit does one IO to the second application structure with the command to commit the MQPUT(making the messages available to other applications)

A total of 7 IO requests to the CF structures.

Example measurements

On my test box I measured the two scenarios, and looked at the MQ accounting class(3).

The application looped doing MQPUT to queue1, MQPUT to queue2, COMMIT of non persistent messages

                                           Same structure   Different structures
Duration of MQPUT                              70 uSecs          70 uSecs
CF time of MQPUT                               10 uSecs          10 uSecs
Duration of commit                             67 uSecs         114 uSecs
Count of CF requests during commit                3                 5
Sum of duration of CF requests in commit       23 uSecs          36 uSecs

What happens if I do an MQGET of a message in a different structure to the MQPUT?

It does not matter if the MQGET is from a different structure to the MQPUT, because the request to physically delete the message from the structure is not done as part of the transaction.

Are your servers PETS or CATTLE?

Feb 1 2018 ‎

Someone recently told me about this analogy. See here for an article with a link to the original.

Pets

  1. are given a cuddly name
  2. are unique
  3. are lovingly handcrafted
  4. when they are ill, you nurse them back to health

Cattle

  1. are given a number, not a name
  2. they are all very similar
  3. if one gets ill, you shoot it, and get another one

These blog posts are from when I worked at IBM and are copyright © IBM 2017.