Setting up a CA signed certificate for the mqweb server end

When using mqweb with certificates you can use

  • a self signed certificate to identify the server
  • a CA signed certificate to identify the server

You can use certificates to authenticate…

  • a self signed certificate at the client end
  • a CA signed certificate at the client end

This post explains how I set up mqweb to use a CA signed certificate at the server, and how to import the CA certificate into my Chrome browser.

The steps are

  • Create your certificate authority certificate
  • Create the certificate request for mqweb server
  • Sign the request
  • Create the mqweb keystore and import the mqweb certificate
  • Import the CA into the web browser keystore if required

Create your certificate authority certificate

If you do not already have a certificate authority and a process for signing certificates you need to set these up.   To do it properly, you should create a certificate request and have an external CA sign it.

The following command creates a self signed certificate.   This is your CA authority certificate and private key.

openssl req -x509 -config openssl-ca.cnf -newkey rsa:4096 -subj "/C=GB/O=SSS/OU=CA/CN=SSCA" -nodes -out cacert.pem -keyout cacert.key.pem -outform PEM

  • openssl req -x509 – create a self signed certificate request.  -x509 says self signed.
  • -config openssl-ca.cnf – use this file for the definitions
  • -newkey rsa:4096 – generate a new key
  • -nodes  – do not encrypt the private keys.  I do not know if this should be specified or not.
  • -subj "/C=GB/O=SSS/OU=CA/CN=SSCA" – with this DN
  • -out cacert.pem – output the certificate.  This is used when signing.  This file is sent to all users.
  • -keyout cacert.key.pem – output the private key.  This is used when signing.  This file stays on the machine.
  • -outform PEM – in this format
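As a quick sanity check you can display what was created.  The sketch below uses a throwaway 2048-bit CA and no config file (so the extensions above are not present); the subject values are just the examples from this post.

```shell
# Sketch: create a throwaway self signed CA in a temporary directory,
# then display its subject, issuer and validity dates.
tmp=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 30 \
  -subj "/C=GB/O=SSS/OU=CA/CN=SSCA" \
  -keyout "$tmp/cacert.key.pem" -out "$tmp/cacert.pem"
# For a self signed certificate the subject and issuer are the same.
openssl x509 -in "$tmp/cacert.pem" -noout -subject -issuer -dates
```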

In the config file, the important options I had were

[ req_extensions ]
subjectKeyIdentifier = hash
authorityKeyIdentifier = keyid:always, issuer
basicConstraints = critical,CA:TRUE, pathlen:0
keyUsage = keyCertSign, digitalSignature

You need to distribute the cacert.pem certificate to all your users, so they import it into their keystores.

Create the certificate request for mqweb server

The following command creates a certificate request which will be sent to the CA to sign.

openssl req -config mqwebserver.config -newkey rsa:2048 -out mqweb.csr -outform PEM -keyout mqweb.key.pem -subj "/C=GB/O=cpwebuser/CN=mqweb" -passin file:password.file -passout file:password.file

  • openssl req – as this request does not have -x509 specified it is for a certificate request
  • -config mqwebserver.config – use definitions in the specified  file
  • -newkey rsa:2048 – create a new  certificate request and private key with a 2048 bit  RSA key
  • -out mqweb.csr – use this output file for the request to be sent to the CA
  • -outform PEM – use pem format
  • -keyout mqweb.key.pem – put the private key in this file
  • -subj "/C=GB/O=cpwebuser/CN=mqweb" – with this distinguished name. It can have any values which meet your enterprise standards.
  • -passin file:password.file -passout file:password.file – use the passwords in the file(s).  The file:… says use this file. You can specify a password instead.  As the same file is used twice, two lines in the file are used.

In the config file, the important options were

[ req_extensions ]
subjectKeyIdentifier = hash
basicConstraints = CA:FALSE
subjectAltName = DNS:localhost, IP:127.0.0.1
nsComment = "OpenSSL mqweb server"
keyUsage = critical, nonRepudiation, digitalSignature, keyEncipherment
extendedKeyUsage = critical, serverAuth
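Before sending the request off, you can check what is in it.  A sketch, generating a throwaway request without a config file (so the req_extensions above are not present):

```shell
# Sketch: create a throwaway certificate request and key, then inspect it.
tmp=$(mktemp -d)
openssl req -newkey rsa:2048 -nodes \
  -subj "/C=GB/O=cpwebuser/CN=mqweb" \
  -keyout "$tmp/mqweb.key.pem" -out "$tmp/mqweb.csr"
# -verify checks the request's own signature; -subject displays the DN
openssl req -in "$tmp/mqweb.csr" -noout -verify -subject
```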

Sign the request

Send the *.csr file to your CA.

Initial setup of ca signing process

If this is the first time you are using your CA you need to set up some files.  The files are referred to in the config file used by the openssl ca command.

touch index.txt
echo '01' > serial.txt

  • index.txt  is the certificate database index file.  This name is used in the config file option database =… .  For the format of this file see here. 
  • serial.txt contains the current serial number of the certificate. This name is used in the config file option serial =… .

Sign the certificate request

This takes the .csr file and signs it.

openssl ca -config casign.config -md sha384 -out mqweb.pem -cert cacert.pem -keyfile cacert.key.pem -infiles mqweb.csr

  • openssl ca – do the ca signing
  • -config casign.config – using the specified config file
  • -md sha384 – what message digest strength to use
  • -out mqweb.pem – put the signed certificate in this file
  • -cert cacert.pem – sign it with this CA certificate
  • -keyfile cacert.key.pem – sign it with this CA private key
  • -infiles mqweb.csr – this is the input certificate request file

This displays the certificate, so check it is correct.  You get prompted

  • Sign the certificate? [y/n]:y
  • 1 out of 1 certificate requests certified, commit? [y/n]y

In the config file the important section is

[ ca_extensions ]
subjectKeyIdentifier = hash
authorityKeyIdentifier = keyid,issuer
basicConstraints = CA:FALSE
keyUsage = keyEncipherment
extendedKeyUsage = serverAuth

Send the signed certificate back to the requestor.
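When the signed certificate comes back, it is worth checking it validates against the CA certificate before going any further.  The sketch below uses openssl x509 -req as a lightweight stand-in for the openssl ca command (it needs no index.txt or serial.txt), with throwaway keys and the example DNs from this post.

```shell
# Sketch: create a throwaway CA, a throwaway request, sign it, and check
# the signed certificate chains back to the CA.
tmp=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 30 \
  -subj "/C=GB/O=SSS/OU=CA/CN=SSCA" \
  -keyout "$tmp/cacert.key.pem" -out "$tmp/cacert.pem"
openssl req -newkey rsa:2048 -nodes \
  -subj "/C=GB/O=cpwebuser/CN=mqweb" \
  -keyout "$tmp/mqweb.key.pem" -out "$tmp/mqweb.csr"
# Sign the request (a lightweight alternative to openssl ca)
openssl x509 -req -in "$tmp/mqweb.csr" -CA "$tmp/cacert.pem" \
  -CAkey "$tmp/cacert.key.pem" -CAcreateserial -days 30 \
  -out "$tmp/mqweb.pem"
# This should report: .../mqweb.pem: OK
openssl verify -CAfile "$tmp/cacert.pem" "$tmp/mqweb.pem"
```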

Create the keystore to be used by mqweb and import the certificate

If you need to replace an existing certificate, you can first delete it from the keystore using runmqckm -cert -delete -db mqweb.p12 -pw password -label mqweb.  This is not required.

openssl pkcs12 -export -inkey mqweb.key.pem -in mqweb.pem -out mqweb.p12 -CAfile cacert.pem -chain -name mqweb -passout file:password.file -passin file:password.file

This command creates the keystore (if required) and imports the signed certificate and private key into the store with the specified name.  If a certificate exists with the same name, it is replaced.

  • openssl pkcs12 -export – this says a pkcs12 file will be created
  • -inkey mqweb.key.pem – using this private key file
  • -in mqweb.pem – and this signed certificate
  • -out mqweb.p12 – output it to this pkcs12 keystore, used by mqweb
  • -CAfile cacert.pem – using this CA certificate
  • -chain – include all of the certificates needed when adding the certificate to the key store
  • -name mqweb – create this name in the keystore.  It is used to identify the key in the key store.
  • -passout file:password.file -passin file:password.file – use these password files

There is no config file for this operation.
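You can list what ended up in the keystore with openssl pkcs12 -info.  The sketch below builds a store from a throwaway key and self signed certificate standing in for mqweb.pem; password.file holds the store password, as in the command above.

```shell
# Sketch: build a pkcs12 store, then list its contents without the key.
tmp=$(mktemp -d)
echo "password" > "$tmp/password.file"
# Throwaway key and certificate standing in for mqweb.key.pem/mqweb.pem
openssl req -x509 -newkey rsa:2048 -nodes -days 30 \
  -subj "/C=GB/O=cpwebuser/CN=mqweb" \
  -keyout "$tmp/mqweb.key.pem" -out "$tmp/mqweb.pem"
openssl pkcs12 -export -inkey "$tmp/mqweb.key.pem" -in "$tmp/mqweb.pem" \
  -name mqweb -out "$tmp/mqweb.p12" -passout file:"$tmp/password.file"
# -info shows the friendlyName (mqweb) and the store's encryption settings
openssl pkcs12 -info -in "$tmp/mqweb.p12" -nokeys \
  -passin file:"$tmp/password.file"
```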

Use chmod and chown to protect this keystore file from unauthorised access.

Change the mqweb configuration file.

<keyStore id="defaultKeyStore" 
          location="/home/colinpaice/ssl/mqweb.p12"  
          type="pkcs12" 
          password="password"/> 
<!-- the trust store is used when authenticating 
<keyStore id="defaultTrustStore" 
          location="/home/colinpaice/ssl/key.jks" 
          type="JKS" 
          password="zpassword"/>
-->
<ssl id="defaultSSLConfig" 
     keyStoreRef="defaultKeyStore" 
     serverKeyAlias="mqweb" 
     clientAuthentication="false" 
     clientAuthenticationSupported="false" 
/>
<!--trustStoreRef="defaultTrustStore" sslProtocol="TLSv1.2"
-->

The keystore location, and the serverKeyAlias which identifies which certificate to use, are the important settings.

Stop mqweb

/opt/mqm/bin/endmqweb

Start mqweb

/opt/mqm/bin/strmqweb

Check /var/mqm/web/installations/Installation1/servers/mqweb/logs/messages.log for

Successfully loaded default keystore: /home/colinpaice/ssl/ssl2/mqweb.p12 of type: pkcs12.   This means it has successfully opened the keystore.

If you do not get this message use a command like grep " E " messages.log and check for messages like

E CWPKI0033E: The keystore located at …. did not load because of the following error: keystore password was incorrect.

Import the CA certificate into Chrome

You need to do this once for every CA certificate

I have several profiles for Chrome.  At one point it hiccuped and created a new profile; my scripts carried on updating the old profile until I realized a different profile was now being used.

Find the keystore

In Chrome, use the url chrome://version/ .  This gives the profile path, for example /home/colinpaice/snap/chromium/986/.config/chromium/Default

You can remove the old CA certificate

certutil -D -d sql:/home/colinpaice/snap/chromium/986/.pki/nssdb -n myCACert

  • certutil -D – delete the certificate
  • -d sql:/home/colinpaice/snap/chromium/986/.pki/nssdb – from this keystore directory
  • -n myCACert – with this name

Add the new CA certificate

certutil -A -d sql:/home/colinpaice/snap/chromium/986/.pki/nssdb -t "C,," -i cacert.pem -n myCACert

  • certutil -A – add a certificate
  • -d sql:/home/colinpaice/snap/chromium/986/.pki/nssdb – into this keystore directory
  • -t "C,," – give it these trust attributes.
    • C says Trusted CA.  The certificate appears in Chrome under “certificate authorities”
  • -i cacert.pem – import this certificate
  • -n myCACert – with this name
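You can check the import worked by listing the contents of the database with certutil -L.  The sketch below uses a scratch NSS database rather than the Chrome one (certutil comes from the NSS tools package and may not be installed everywhere, so the first line skips if it is missing):

```shell
# Sketch: create a scratch NSS database, import a throwaway CA, list it.
command -v certutil >/dev/null 2>&1 || exit 0  # skip if NSS tools missing
tmp=$(mktemp -d)
# Create an empty NSS database with no password
certutil -N -d sql:"$tmp" --empty-password
# Throwaway CA certificate standing in for cacert.pem
openssl req -x509 -newkey rsa:2048 -nodes -days 30 \
  -subj "/C=GB/O=SSS/OU=CA/CN=SSCA" \
  -keyout "$tmp/cacert.key.pem" -out "$tmp/cacert.pem"
certutil -A -d sql:"$tmp" -t "C,," -i "$tmp/cacert.pem" -n myCACert
# List the certificates and their trust attributes
certutil -L -d sql:"$tmp"
```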

Tell Chrome to pick up the changes

Use the url chrome://restart to restart Chrome

Try using it.   Use the url like https://127.0.0.1:9443/ibmmq/console/

You should get the IBM MQ Console login page.

Understanding the TLS concepts for using certificates to authenticate in mqweb

This blog post gives background on setting up mqweb to use digital certificates. The IBM documentation in this area is missing a lot of information, and did not work for me.  I started documenting how to set up the certificates for the mqweb support, and found I had to explain what was needed before giving the commands to do it.  I have also tried to cover it from a production perspective, where you will use signed certificates, automated deployment and so on, instead of GUIs.  I suggest you read this post before trying to set up mqweb with TLS.  Most of the content applies to the REST API as well as to web browsers, but tools like curl and Python are much easier to set up and use.

If I have made mistakes (and it is easy to do so) or there are better ways of doing things,  please let me know.

It covers

  • How certificates are used
  • Different levels of mqweb security and authentication
  • How is mqweb security set up
    • The mqweb server end
    • At the web browser
  • How does the handshake work if certificates are being used?
  • Commands to issue REST commands to MQWEB
  • Setting up certificates.
    • Distinguished names
    • Certificate purpose for signed certificates
    • Self signed certificate
    • Strong protection
    • Certificates expire
  • Encrypting passwords in the config file
  • Which keystores should I use

How certificates are used

As part of the TLS handshake, the server sends a certificate to the client to identify the server. If you are doing authentication with certificates, a “client” certificate is also sent up to the server, providing information about the subject (person) and how the certificate can be used, for example for authentication or just for encryption.

You can have “self signed” certificates and certificates signed by a certificate authority.

With self signed, you need a copy of the certificate at each end of the connection, so the certificate received from the client can be checked against the server’s copy.  With more than one certificate, it quickly becomes impractical to use self signed certificates, and they would not be used in production. (Consider that you have 100 partners you work with; you have 100 Linux servers you need to keep up to date; and on average a certificate expires every week.)  Also, emailing a copy of a self signed certificate to a recipient is open to a “man in the middle” attack, where your certificate is replaced with a bogus one, and so is not secure.  Self signed certificates are fine for test systems.

A Certificate Authority (CA) provides a chain of certificates which are used to sign other certificates.  If both ends share a common CA certificate, the receiver can validate what it is sent.  Strictly, the CA owns the root certificate which certifies the rest of the chain.

With a signed certificate, a checksum of the certificate is encrypted by the signer and appended to the certificate as part of the “signing process”. When a certificate is sent to a recipient, it contains the certificate authority’s certificate. The recipient checks that the certificate authorities match, then does the same checksum calculation and compares it with the value in the certificate. If they match, the certificate is valid.
This means that the server only needs the certificate authority chain in its trust store; it does not need the individual client certificates.  Similarly, when the server sends its certificate down to the browser, the browser does not need a copy of that certificate, just the CA chain.  If you change the certificate at the server end, you do not need to change the browser end.

This greatly simplifies the use of digital certificates, as you only need one certificate in the server, to identify the server, each user needs just one certificate and there is no emailing of certificates.

CA signed certificates are more secure than self signed certificates, and the validation has stronger checks.  A certificate has “extensions” which define how the certificate can be used, for example signing code, but not for authentication; it can be used to identify a server, but not an end user.

Certificates have a validity date range: not-valid-before and not-valid-after. You need a process to renew certificates before they expire. This applies to the “client” certificate and the certificate authority certificate.

Certificates have a Distinguished Name, for example CN=colinpaice, OU=Test, O=IBM, C=GB; where CN is the Common Name (the user name), OU is the Organizational Unit, O is the Organization and C is the Country.

When I used curl with a self signed certificate I got

curl: (60) SSL certificate problem: self signed certificate in certificate chain
More details here: https://curl.haxx.se/docs/sslcerts.html

curl failed to verify the legitimacy of the server and therefore could not
establish a secure connection to it. To learn more about this situation and
how to fix it, please visit the web page mentioned above.

If you use the curl --insecure option you avoid this message, and it works.

Different levels of mqweb security and authentication

There are different levels of security and authentication

  1. No TLS. Data flows between a web server and web browser in clear text.  This is not used by mqweb.
  2. The server sends a certificate to the web browser. This provides an encryption key for traffic. It also provides the URL of the web server.  For a CA signed certificate, the web browser then checks the URL coming from the web server, with the names in the certificate to make sure they match. This gives protection from a bad person trying to point your browser at their server using a different URL. If you are not using certificates to authenticate, you logon with userid and password.  Data flowing is encrypted using information sent down in the certificate.   This is used by mqweb.
  3. As 2), and in addition, a certificate is sent up from the client and used instead of entering a password and userid. This is configurable at the mqweb level.  The certificate is validated by the server, and the Common Name is used as the userid.  This means
    1. Every certificate needs a unique CN
    2. a password is not used
    3. DN:CN=colinpaice,O=SSS,OU=TEST,C=GB and DN:CN=colinpaice,O=SSS,OU=PROD,C=GB

would both be accepted for userid colinpaice if they both had the same CA.

In all cases the userid is checked against the information in the mqweb configuration xml file.

You can specify userids or groups in the xml file.  It is better to specify groups than userids as it is less administration. If a person joins a department, or leaves a department, then you just have to update the group, not all of the configuration files.

How is mqweb security set up?

The mqweb server end

The server’s certificate and private key are stored in a key store.  This store will have at least one, and can have many, certificates and private keys.

The certificates used for client authentication are stored in the trust store.

For example with the definition in the mqwebuser.xml

<ssl id="defaultSSLConfig"
  keyStoreRef="defaultKeyStore" serverKeyAlias="colincert"
  trustStoreRef="defaultTrustStore"
  sslProtocol="TLSv1.2"
  clientAuthentication="true"
  clientAuthenticationSupported="true"
/>

The keyStoreRef="defaultKeyStore" says look for the definition with id defaultKeyStore. This will point to a key store. The certificate with the name defined in serverKeyAlias is used.

So with

<keyStore id="defaultKeyStore"
          location="/home/colinpaice/ssl/key.pkcs12"
          type="pkcs12"
          password="zpassword"/>

the keystore /home/colinpaice/ssl/key.pkcs12 with certificate colincert will be used.

The trustStoreRef="defaultTrustStore" says look for the definition with id defaultTrustStore. This will point to the trust store.

With

<keyStore id="defaultTrustStore"
   location="/home/colinpaice/ssl/key.jks"
   type="JKS"
   password="zpassword"/>

the trust store at location /home/colinpaice/ssl/key.jks will be used.

This needs to have the CA certificates, and any self signed certificates.

At the web browser

The web browser needs to have the self signed certificate or the CA certificate, for the certificate which is sent to the browser as part of the handshake “Hello I am the server”, so it can be validated.  If the certificate cannot be validated,  for example there was no copy of the self signed certificate, the browser will notify you and may allow you to accept the risk.   This is insecure, and ok for a test environment.

If certificates are to be used for authentication, then the web browser also needs the user’s certificate and private key, along with any CA certificates.

How does the handshake work if certificates are being used?

The web server extracts the specified certificate from the keystore and sends it to the web browser.

The certificate is validated either with the browser’s self signed copy, or with the browser’s copy of the CA certificates.

For a signed certificate, the url used to reach the web server is checked against the names in the “Subject Alternative Name” extension within the certificate, for example IP:127.0.0.1.

If client authentication is used, the certificates are read from the server’s trust store. The DN of the self signed certificates and the DN of the CA certificates are extracted and sent down to the browser.

The browser takes this list and checks through the “user” certificates in its key store, for those self signed certificates with a matching DN, or those certificates which have been signed by the CA.

The list of matching certificates is displayed for the user to select.
For example

If the mqweb server trust store had

  • CN=colinpaice,ou=SSS1,o=SSS,C=GB self signed
  • CN=CA,O=SSS,C=GB Certificate Authority

and the browser’s keystore had

  • CN=colinpaice,ou=SSS1,o=SSS,C=GB self signed
  • CN=colinpaice1,ou=ZZZZ,o=SSS,C=GB self signed
  • CN=colinpaice2,ou=signed,o=SSS,C=GB signed by CN=CA,O=SSS,C=GB
  • CN=colinpaice3,ou=signed,o=ZZZ,C=GB signed by CN=CA,O=ZZZ,C=GB

the list displayed to the user would indicate the following certificates

  • CN=colinpaice,ou=SSS1,o=SSS,C=GB
  • CN=colinpaice2,ou=signed,o=SSS,C=GB

Chrome and Firefox remember this decision until the browser is restarted, so if you want to change to use a different certificate – tough; you have to restart the browser.

The browsers use a keystore which you can access through the browser or commands like certutil or pk12util.

Commands to issue REST commands to MQWEB

If you use commands like curl, you can specify the certificate authority file, the user certificate, and the user private key using parameters; you do not need a keystore.  For example, three .pem files are used: one for the certificate, one for the private key, and one for the CA certificates.

curl --cacert ./cacert.pem --cert ./colinpaice.pem:password --key colinpaice.key.pem https://127.0.0.1:9443/ibmmq/rest/v1/admin/qmgr

If you use the Python requests package, you can issue requests and again specify the various certificate files as parameters.

Setting up certificates.

It is fairly complicated setting up certificates as there are many options.

Distinguished names

Your organization will have standards you need to conform to. For example

The Distinguished Name (eg CN=colinpaice,ou=test,o=SSS,C=GB). Organisation is usually your enterprise name, OU could be a department name, and CN could be a userid or other name. You can set up MQ channels to restrict the DN, for example to those containing ou=test,o=SSS,C=GB.

I set up a DN with C=GB,O=cpwebuser,CN=colinpaice. If I look at the certificate it has CN = colinpaice, O = cpwebuser, C = GB – in a different order.

If I look in the browser’s keystore, the certificates show up under the “O” value, so under “org-cpwebuser”.

As the CN is used as the userid, make sure it has the same case. On my Linux machine my userid is colinpaice, so the CN must be colinpaice.
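You can see the order in which openssl stores the DN components; as noted above, different tools may display them in a different order.  A sketch with a throwaway certificate:

```shell
# Sketch: create a throwaway certificate with the DN given as C, O, CN,
# then display the subject as openssl stores it.
tmp=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 30 \
  -subj "/C=GB/O=cpwebuser/CN=colinpaice" \
  -keyout "$tmp/key.pem" -out "$tmp/cert.pem"
openssl x509 -in "$tmp/cert.pem" -noout -subject
```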

Certificate purpose for signed certificates

If you create a certificate signed by a CA (even your private CA) the following checks are done during the TLS handshake.

  • The certificate has been signed with the “serverAuth” extended key usage, so it can be used to identify servers.
  • The key usage allows digital signature (authentication).
  • The certificate has the IP address (or host name) of the server in its Subject Alternative Name.

If any one of these fail, the handshake fails.

A certificate can be used for many purposes, for example signing code, signing certificates (by a certificate authority). These properties are defined when the certificate is signed.

Both client and server certificates needs keyUsage = digitalSignature.

The Extended Key Usage (EKU) indicates the purpose, or what the certificate public key can be used for.

  • A client certificate needs extendedKeyUsage = clientAuth
  • A server certificate needs extendedKeyUsage = serverAuth.

The command openssl x509 -purpose -in colinpaice.pem -inform PEM -noout gave

Certificate purposes:

  • SSL client : Yes
  • SSL client CA : No
  • SSL server : No
  • SSL server CA : No

So we can see this certificate has the clientAuth extended key usage.

If these are not set up properly  you can get errors about unauthorized access and “SSLHandshakeException: null cert chain” errors.

Use openssl x509 -in colinpaice.pem -noout -text | less and search for Usage.

This gave me

X509v3 Key Usage: 
   Digital Signature
X509v3 Extended Key Usage: 
   TLS Web Client Authentication
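You can reproduce this display with a throwaway certificate.  The -addext option (openssl 1.1.1 and later; an assumption about your openssl level) sets the extensions without a config file:

```shell
# Sketch: create a client certificate with the usages discussed above,
# then display its Key Usage and Extended Key Usage extensions.
tmp=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 30 \
  -subj "/C=GB/O=cpwebuser/CN=colinpaice" \
  -addext "keyUsage=digitalSignature" \
  -addext "extendedKeyUsage=clientAuth" \
  -keyout "$tmp/key.pem" -out "$tmp/cert.pem"
openssl x509 -in "$tmp/cert.pem" -noout -text | grep -A1 "Usage"
```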


Self signed certificate

A self signed certificate is considered weaker than a signed certificate, and some of the checks are not done.

If you are using a self signed certificate, the certificate needs keyUsage = digitalSignature, keyCertSign; it does not need the IP address of the server, nor extendedKeyUsage.

It needs to be installed into the browser keystore as Trusted.  If it is not flagged as trusted, the browsers give errors, and may allow you to continue if you accept the risk.

Strong protection

There are different strengths of encryption. Stronger ciphers take longer to break. Some encryption or hashing techniques are considered weak, and may not be accepted. For example SHA1 is weak (some browsers will not accept it); SHA256 and SHA384 are much stronger.

Some encryption can be offloaded to specialized chips, so have a smaller CPU cost. You need to use your enterprise standard levels of encryption and hashing.

Certificates expire

All certificates (user and CA) have a valid date range. If they are used beyond their end date, you will get an error, and it will not work. You need to have a process in place to renew certificates, including CA certificates.  If you use an external CA, it can take days to get a new certificate and deploy it.
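A renewal process can use openssl's -checkend option to warn before a certificate expires.  A sketch with a throwaway certificate valid for 30 days:

```shell
# Sketch: create a throwaway certificate, then check how close it is
# to expiry. -checkend n exits 0 if the certificate is still valid n
# seconds from now (604800 seconds = one week).
tmp=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 30 \
  -subj "/C=GB/O=SSS/OU=CA/CN=SSCA" \
  -keyout "$tmp/cacert.key.pem" -out "$tmp/cacert.pem"
if openssl x509 -checkend 604800 -noout -in "$tmp/cacert.pem"; then
  echo "certificate is good for at least another week"
else
  echo "certificate expires within a week - renew it"
fi
```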

Keystores

Some processing needs a keystore.  For example, Firefox and Chrome use an NSS format keystore.  The mqweb server needs a keystore to hold the certificate which identifies the server, and can use a trust store to hold the remote authentication certificates.

There are a variety of tools to manage keystores.  IBM provides a GUI interface, strmqikm, and command line interfaces such as runmqckm.  There is keytool from Oracle, and other tools such as openssl (and its pkcs12 subcommand) and certutil.

There are different sorts of keystores

  • nssdb – used by browsers to store keys
  • pkcs12 – has certificates and the private key
  • cms – contains keys, certificates, etc.
  • jks – Java keystore, used by Java to contain certificates

Certificates can be in different formats

  • .pem – a certificate encoded in base 64.  It may look like
-----BEGIN CERTIFICATE-----
MIIDfzCCAmegAwIBAgIURubFS5yEwZYi//7Qx0kTmX/LnMUwDQYJKoZIhvcNAQEL
...
-----END CERTIFICATE-----
  • .der – a binary format
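The two formats hold the same content, and openssl converts between them.  A sketch with a throwaway certificate:

```shell
# Sketch: create a throwaway PEM certificate, convert it to DER (binary),
# then back to PEM. The round trip reproduces the original file exactly.
tmp=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 30 \
  -subj "/C=GB/O=SSS/OU=CA/CN=SSCA" \
  -keyout "$tmp/cacert.key.pem" -out "$tmp/cacert.pem"
# PEM to DER
openssl x509 -in "$tmp/cacert.pem" -outform DER -out "$tmp/cacert.der"
# DER back to PEM
openssl x509 -in "$tmp/cacert.der" -inform DER -outform PEM \
  -out "$tmp/cacert2.pem"
```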

I found it easiest to create certificates using openssl and import them into the keystores, rather than use a GUI and then export them.  This means it can be scripted and automated, and requires less work.

Encrypting passwords in the config file

Keystores usually have a password.

For example in the mqwebuser.xml file is

<keyStore id="defaultTrustStore"
    location="/home/colinpaice/ssl/key.jks" 
    type="JKS"
    password="zpassword"/>

This password can be encrypted using the /opt/mqm/web/bin/securityUtility command. See here.

For example to encrypt the string “zpassword”

/opt/mqm/web/bin/securityUtility encode --encoding=aes zpassword gives

{aes}AJOmiC8YKMFZwHlfJrI2//f2Keb/nGc7E7/ojSj37I/5

doing it again gave a different value

{aes}AMsUYgpOjy+rxR7f/7wnAfw1gZNBdpx8RpxfwjeIG8Wj

Your mqwebuser.xml file would then have

<keyStore id="defaultTrustStore"
    location="/home/colinpaice/ssl/key.jks" 
    type="JKS"
    password="{aes}AMsUYgpOjy+rxR7f/7wnAfw1gZNBdpx8RpxfwjeIG8Wj"/>

Which keystores should I use?

I used openssl to define and manage my certificates, because the IBM tools did not have all of the capabilities.

I used a pkcs12 (.pkcs12 or .p12) store for the keystore and a jks(.jks) store for the trust store.

Firefox and Chrome use an nssdb (Netscape) database.  I used pk12util to insert certificates, and certutil to display and manage the database.

Understanding queue high events and time since reset

I was testing out my Java program for processing PCF messages and snagged a “problem” with queue high events.

I had set max depth of the queue to 1.

I put two messages out of syncpoint, and got the following events on the SYSTEM.ADMIN.PERFM.EVENT queue.

"PCFHeader":{
  "Type" :MQCFT_EVENT
  ,"ParameterCount" :6
  ,"Command" :MQCMD_PERFM_EVENT
  ,"Control" :MQCFC_LAST
  ,"Reason" :Queue_Depth_High
  ,"MsgSeq" :1
  ,"Version" :1
  }
  ,"QMGR_Name" :QMA 
  ,"Base_Object_Name/mqca_Base_Q_Name" :CP0000 
  ,"Time_Since_Reset" :53
  ,"High_Q_Depth" :1
  ,"Msg_Enq_Count" :1
  ,"Msg_Deq_Count" :0

and

"PCFHeader":{
  "Type" :MQCFT_EVENT
  ,"ParameterCount" :6
  ,"Command" :MQCMD_PERFM_EVENT
  ,"Control" :MQCFC_LAST
  ,"Reason" :Queue_Full
  ,"MsgSeq" :1
  ,"Version" :1
 }
 ,"QMGR_Name" :QMA 
 ,"Base_Object_Name/mqca_Base_Q_Name" :CP0000   
 ,"Time_Since_Reset" :0
 ,"High_Q_Depth" :1
 ,"Msg_Enq_Count" :0
 ,"Msg_Deq_Count" :0


I think this is telling me

  1. The Queue high event occurred 53 seconds after some other event
  2. One message was put, causing the queue high event
  3. The queue full event occurred 0 seconds after some event.  The documentation says “Time, in seconds, since the statistics were last reset”
  4. A message was put to the queue, unsuccessfully, as Enq_Count is zero.

I think the documentation is not clear.  I did not issue a reset statistics command.   The time since reset varies between 0 and over 6000 seconds.   My qmgr Stats interval was 10 seconds.

I think the time_since_reset is the time since the previous event, or something else, so it looks pretty useless.  When I issued a RESET QMGR TYPE(STATISTICS) command it made no difference to the time since reset.

This time value is MQIA_TIME_SINCE_RESET in the PCF.


I noticed that the event messages are non persistent, so make sure you process the messages before you shut down your queue manager!

Processing Activity Trace PCF messages with Java

This post covers some of the things I found when extending my Java program for processing PCF to cover activity trace.

Operation date has an inconsistent format

The MQCACF_OPERATION_DATE field has the format SimpleDateFormat(“yyyy-MM-dd”), but it is a 12 byte field (instead of the 10 you might expect), and is padded with nulls instead of blanks.  You need to do date = …getStringParameterValue(MQConstants.MQCACF_OPERATION_DATE).substring(0, 10) to capture the key data.


MQIACF_APPL_FUNCTION_TYPE.

This maps to fields MQFUN_TYPE_* with values like MQFUN_TYPE_UNKNOWN, MQFUN_TYPE_USERDEF, MQFUN_TYPE_JVM.   I got Unknown, and do not know what this means.  I think they are connected to the iSeries.

Using the high resolution timer, with microsecond precision.

Puts and gets have a high precision time value. The field MQIAMO64_HIGHRES_TIME gives the time of day with microsecond precision.

To process these values I used

long milliseconds = (long) p.getValue() / 1000;    // microseconds to milliseconds
long microseconds = (long) p.getValue() % 1000000; // microseconds within the second
Calendar cal = Calendar.getInstance();
cal.setTimeInMillis(milliseconds);
Date dateTimeVal = cal.getTime();
SimpleDateFormat sdfFormat = new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss", Locale.ENGLISH);
System.out.println("High res timer: " + sdfFormat.format(dateTimeVal)
                                      + "."
                                      + String.format("%06d", microseconds)  // add microseconds
                                      + "Z");

You can use the microseconds value when doing calculations such as the duration between operations.

You get the high resolution timer for MQGET, MQPUT and MQPUT1; for the other calls, such as MQOPEN, MQCLOSE and MQCMIT, you only get the time from the “Operation_Time” field, for example 18:09:34.
This makes it hard to find where your application is spending its time.

For example

MQ Verb   OperationTime  OperationDuration (microseconds)  Hi-res timer
MQOPEN    18:09:34       56
MQPUT     18:09:34       66                                18:09:34.747837
MQCMIT    18:09:34       12
MQGET     18:09:34       60                                18:09:34.754299
MQCLOSE   18:09:34       54

This application was doing just MQ verbs. The time between the start of the put and the start of the get was 6462 microseconds.  The put took 66 microseconds, and the commit took 12 microseconds, so there were 6384 microseconds unaccounted for.  The reported durations are at the queue manager level; there will be elapsed time from issuing the request to getting into the queue manager, and back out again.

I put a microsecond timer in my client program to time the MQ requests.   After the MQOPEN, the first MQGET took 2800 microseconds, the second MQGET took 500 microseconds.

There is a big difference between the time as reported by the queue manager and the time as seen by the application:

  • Work has to be done from the API request to be able to issue the network request.
  • The request has to flow over the network
  • The queue manager has code to catch the network request and issue it to the queue manager
  • There is the duration of the request – as seen by the accounting trace
  • The reply has to be transformed to a network request
  • The reply has to flow over the network
  • The reply has to be caught and returned to the application.

If the application was using local bindings it would be quicker, as there would be no network component.

The timer code was

long start = System.nanoTime();
queue.get(msg, gmo);
long duration_in_microseconds = (System.nanoTime() - start) / 1000;

Note: the machine may not be able to provide a timer accurate at the nanosecond level.

Embedded structures

Requests for get have embedded structures for MQMD (type MQBACF_MQMD_STRUCT  -7028) and GMO (MQBACF_MQGMO_STRUCT – 7027).
Requests for put have embedded structure for PMO( MQBACF_MQPMO_STRUCT – 7029).

The parameter types are the values for MQ CallBack, and have been reused.

The data is a byte string (MQCFBS).  I could not find a way to easily extract the fields from these structures and display them without having to construct each field myself.   The integer fields can be big endian or little endian.  The version field for me was "01000000" or "02000000", which is little endian.  If it was big endian the value would have been "00000001".

I had to write some methods to chomp my way along the string to produce the fields; I could not find a way of passing the string to a supplied method and getting the fields back.

I created a method each for longs, strings, byte arrays and single bytes.  I found it easiest to use getStringValue() on the structure, which gives you the data as a hexadecimal string.  You then need to do additional processing, for example convert from hex to a String, or convert the string to an integer using base 16.  Byte arrays could be used unchanged.
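The chomping approach can be sketched as below. This is a minimal sketch; the class and method names are my own invention, not part of the MQ classes, and real structures have many more fields.

```java
// Sketch of helpers to walk along the hexadecimal string returned by
// getStringValue() for an embedded structure (MQMD, GMO, PMO).
class HexFields {
    private int pos = 0;       // current offset into the hex string
    private final String hex;  // the whole structure as a hex string

    HexFields(String hex) { this.hex = hex; }

    // Consume 'n' bytes (2 hex characters each) and return them raw.
    String bytes(int n) {
        String s = hex.substring(pos, pos + 2 * n);
        pos += 2 * n;
        return s;
    }

    // Consume a 4-byte integer stored little endian, e.g. "02000000" -> 2.
    long intLE() {
        String s = bytes(4);
        long v = 0;
        for (int i = 3; i >= 0; i--)  // reverse the byte order
            v = v * 256 + Integer.parseInt(s.substring(2 * i, 2 * i + 2), 16);
        return v;
    }

    // Consume a character field of 'len' bytes, decoding each hex pair.
    String chars(int len) {
        String s = bytes(len);
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < s.length(); i += 2)
            sb.append((char) Integer.parseInt(s.substring(i, i + 2), 16));
        return sb.toString();
    }
}
```

For example, a little-endian version field followed by a 4-character eye catcher would be consumed with one intLE() call then one chars(4) call.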

More moans about the management of Monitoring in MQ

I was writing some Java code to process the various PCF messages produced by MQ.  I got the code working and handling most of the different sorts of message types (statistics, accounting, change events etc.) in a couple of days.  I then thought I'd spend the hour before breakfast handling the monitoring records on midrange MQ.   A couple of days (!) later, I got it to process the monitoring messages into useful data.

I had already written about the challenges of this data in a blog post, Using the monitoring data provided via publish in MQ midrange, but I had not actually tried to write a program to process the data.  Below I have included some of the lessons I learned.

Summary of the Monitoring available

  1. Most records are published every 10 seconds, but I have had examples where records were produced with an interval ranging from 0.008 seconds to 1700 seconds.
  2. You can subscribe to different topics, for example statistics on the MQ APIs used, statistics on logging, and statistics on queue usage.
  3. The information is not like other MQ PCF data, where each field has a named constant such as MQConstants.MQCAMO_START_DATE.
  4. There is meta data which describes the data: you get a field with unit information, and a description.  Good in theory; it does not work well in practice.
  5. If you subscribe to the meta information you get information about the data fields.  This is not very helpful if you are running MQ in an enterprise where there is centralized monitoring and reporting, for example into Splunk, spreadsheets, or ElasticSearch, and where data is sent to a remote site.
  6. Data fields are identified by three fields: class, type and record.  Class could be DISK; type could be Log; and record is a number, for example 12, meaning "Log write, data size".

Subscribing to information.

You can write code to create a subscription when your program is running and so get notified of the data and meta data only while your program is running; or you can manually create a subscription, so you always have the data (until your queue fills up).

DEFINE SUB(COLINDISKLOG)
TOPICSTR('$SYS/MQ/INFO/QMGR/QMA/Monitor/DISK/Log')
DEST(COLINMON)  USERDATA(COLINDISK)

This will subscribe to the data for queue manager QMA, DISK class, Log type.  Note the case of the topic string.

You cannot use a generic for the queue manager name, so you need a unique subscription for each queue manager.  (I cannot see why the queue manager name is needed on midrange; perhaps someone can tell me.)  The queue manager name is available in the MQMD.ReplyToQMgr field.

You can ask that the meta information is sent to a queue (whenever the subscription is made) for example

DEF SUB(COLINDISKLOGMETA)
TOPICSTR('$SYS/MQ/INFO/QMGR/QMA/Monitor/METADATA/DISK/Log')
DEST(COLINMON)  USERDATA(COLINDISK)

The queue can be a remote queue with the target queue on your monitoring environment.

MQRFH2 and message properties

With the data, there is message property data about the message, for example the topic, and any user data.  I could not get the supplied Java methods (the MQHeader class) to skip the RFH2 data.

If I used gmo.options = …  MQConstants.MQGMO_PROPERTIES_IN_HANDLE, the data was returned as properties, and not as an RFH2 header, so the PCFMessage class worked with the message.

For one of my monitoring records, the properties were

,"Properties":{
"mqps.Sud" :COLINDISK
,"mqps.Top" :$SYS/MQ/INFO/QMGR/QMA/Monitor/DISK/Log
}

Where mqps.Sud is MQSubUserData (from my subscription) and mqps.Top is MQTopicString.

Identifying the records and data

You may know from the queue name, that you are getting monitoring data.

You can also tell from the PCF header values.  Monitoring data has Type :MQCFT_STATISTICS and Command :MQCMD_NONE; normal statistics data has Type :MQCFT_STATISTICS and Command :MQCMD_STATISTICS_Q, so you can tell them apart.

The data and the meta data both come in as monitoring records.   You can tell from the topic in the message property, or from the content of the messages.
If the message has PCF groups then it contains meta data records, which should be processed to extract the identifiers, or skipped.

I found that you have to process the message twice, to extract different fields.

Pass 1 – extract:

  1. Class
  2. Type
  3. Interval
  4. Count number of groups
  5. Display queue name if present

If the number of groups is 0 then:

Pass 2 – extract:

  1. field type
  2. value

You need the class, type, and field type to identify what the value represents.

The comments in the sample C program which formats the records imply the records may not always be in the same sequence, so in theory (but unlikely) the class and type records may be at the end of the message instead of at the front.  Doing two passes means you can extract the identifying values before you need to use them.

Identifying what the data represents.

The meta data provides the interpretation of the value.   You can process the messages containing meta data, and dynamically build a map of values to descriptions.  In my "enterprise" this did not work for me, so I created a static map of values.  I created a key: 256 * 256 * Class + 256 * Type + field type, and created a hashmap of (key, description).
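The key calculation and static map can be sketched as below. The class name is my own; the two entries shown are examples taken from later in this post.

```java
import java.util.HashMap;
import java.util.Map;

// A static map from (class, type, field type) to a description, using the
// key 256*256*Class + 256*Type + fieldType described in the text.
class MonitorNames {
    private static final Map<Integer, String> names = new HashMap<>();

    // Signature mirrors add(Class, Type, Field Type, Units, Description)
    // used in the static definitions below; units are kept for reference.
    static void add(int clazz, int type, int field, String units, String desc) {
        names.put(256 * 256 * clazz + 256 * type + field, desc);
    }

    static String lookup(int clazz, int type, int field) {
        return names.getOrDefault(256 * 256 * clazz + 256 * type + field,
                "Unknown:" + clazz + "/" + type + "/" + field);
    }

    static {
        add(0, 0, 2, "Hundredths", "CPULoad1Min");
        add(0, 1, 2, "MB", "QMRAMMiB");
    }
}
```

A lookup that fails returns an "Unknown" marker rather than null, so an unexpected record shows up clearly in the output instead of crashing the formatter.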

Part of the meta data is a “units” field

Units are

  1. "Unit", for example the maximum number of connections in the interval.
  2. "Delta", for example the number of puts in the interval (total number of puts at the end of the interval minus the number of puts at the start of the interval).  These deltas are often interesting as a rate, for example puts per second, displayed as a float with 2 decimal places.
  3. "Hundredths", for example CPU load over 1 minute.  To use this I converted it to float, divided by 100, and printed it with two decimal places.
  4. "KB", I don't think this is used.
  5. "Percent", for example file system full percentage.  It reported 4016 for one value; you have to divide this by 100 to get the value (40.16).  I converted it to float, divided it by 100, and printed it with two decimal places.
  6. "Microseconds", for example the time to write a log record.
  7. "MB", for example the RAM size of the machine (in MB).   This matched the value from the Linux free -lm command.
  8. "GB", I do not think this is used.
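The divide-by-100 rule for "Percent" and "Hundredths" can be sketched as a small helper (the class and method names are mine, not MQ's):

```java
import java.util.Locale;

// Format a raw monitoring value according to its units, following the
// rules above: Percent and Hundredths are divided by 100 and shown with
// two decimal places; everything else is printed unchanged.
class UnitScale {
    static String format(long raw, String units) {
        switch (units) {
            case "Percent":
            case "Hundredths":
                return String.format(Locale.ENGLISH, "%.2f", raw / 100.0);
            default:
                return Long.toString(raw);
        }
    }
}
```

Using Locale.ENGLISH keeps the decimal separator a "." regardless of the machine's locale, which matters if the output is parsed downstream.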

I do not expect the units will change for a record, because it would make the centralized processing of the records very difficult.

For my processing I changed the descriptions. For example I changed "Log – bytes occupied by extents waiting to be archived" to "LogWaitingArchiveMB".   This is a better description for displaying in charts and reports, and includes the units.

My static definitions were like

//$SYS/MQ/INFO/QMGR/…Monitor/CPU/SystemSummary

// add(Class, Type, Field Type, Units, Description)

add(0,0,0,"Percent","UserCPU%");
add(0,0,1,"Percent","SystemCPU%");
add(0,0,2,"Hundredths","CPULoad1Min");
add(0,0,3,"Hundredths","CPULoad5Min");
add(0,0,4,"Hundredths","CPULoad15Min");
add(0,0,5,"Percent","RAMFree%");
add(0,0,6,"MB","RAMTotalMiB");

// $SYS/MQ/INFO/QMGR/…/Monitor/CPU/QMgrSummary

add(0,1,0,"Percent","QMUserCPU%");
add(0,1,1,"Percent","QMSystemCPU%");
add(0,1,2,"MB","QMRAMMiB");

// $SYS/MQ/INFO/QMGR/QMA/Monitor/DISK/SystemSummary

//add(1,0,0,…); missing
//add(1,0,1,…); missing
//add(1,0,2,…); missing
//add(1,0,3,…); missing
add(1,0,4,"MB","TraceBytesInUseMiB");
….

// Below are the different topics you can subscribe on

// $SYS/MQ/INFO/QMGR/QMA/Monitor/DISK/QMgrSummary
// $SYS/MQ/INFO/QMGR/QMA/Monitor/DISK/Log
// $SYS/MQ/INFO/QMGR/QMA/Monitor/STATMQI/CONNDISC
// $SYS/MQ/INFO/QMGR/QMA/Monitor/STATMQI/OPENCLOSE
// $SYS/MQ/INFO/QMGR/QMA/Monitor/STATMQI/INQSET
// $SYS/MQ/INFO/QMGR/QMA/Monitor/STATMQI/PUT
// $SYS/MQ/INFO/QMGR/QMA/Monitor/STATMQI/GET
// $SYS/MQ/INFO/QMGR/QMA/Monitor/STATMQI/SYNCPOINT
// $SYS/MQ/INFO/QMGR/QMA/Monitor/STATMQI/SUBSCRIBE
// $SYS/MQ/INFO/QMGR/QMA/Monitor/STATMQI/PUBLISH
// $SYS/MQ/INFO/QMGR/QMA/Monitor/STATQ/queuename/OPENCLOSE
// $SYS/MQ/INFO/QMGR/QMA/Monitor/STATQ/queuename/INQSET
// $SYS/MQ/INFO/QMGR/QMA/Monitor/STATQ/queuename/PUT
// $SYS/MQ/INFO/QMGR/QMA/Monitor/STATQ/queuename/GET

Monitoring interval

The monitoring interval field (MQIAMO64_MONITOR_INTERVAL) is in microseconds.  I converted this to float, divided the value by 1,000,000 to convert it to seconds, and printed it with 6 figures after the decimal point ( String.format("%.06f", f); ).

If this value was under 1 second (the smallest I saw was 0.008 seconds) I rounded it to 1.0 seconds.

Making the data more understandable.

To be strictly accurate the units should be MiB not MB, as 1 MB is 1,000,000 bytes and 1 MiB (mebibyte) is 1,048,576 bytes (and if you see 1 mb… that is 8000 mb (millibits) to the byte – so if your 50 mb/second broadband really meant millibits, I would complain).

You may want to convert "Log file system – bytes max" = 19549782016 in bytes, to LogFSBytesMiBMax = 18644.125 MiB or even LogFSBytesGiBMax = 18.20 GiB, though it may be better to keep everything in MiB for consistency.
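The byte-to-MiB/GiB conversion can be sketched as below; the class and field names are my own.

```java
import java.util.Locale;

// Convert raw byte counts into MiB / GiB using binary units
// (1 MiB = 1,048,576 bytes), as discussed above.
class ByteUnits {
    static final double MIB = 1024.0 * 1024.0;
    static final double GIB = MIB * 1024.0;

    static String toMiB(long bytes) {
        return String.format(Locale.ENGLISH, "%.3f", bytes / MIB);
    }

    static String toGiB(long bytes) {
        return String.format(Locale.ENGLISH, "%.2f", bytes / GIB);
    }
}
```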

Processing PCF with Java

I started writing a C program to process PCF data, to do a better job than the provided samples.  For example create a date-time object for tools like ElasticSearch and Splunk, and interpret numeric values as strings.   I found I was writing more and more code to fill in some of the holes in the samples.    I decided to use Java, as there are classes called PCFMessage, and code to allow me to iterate over the PCF elements.
I struggled with processing Monitoring messages, but I’ll create a separate post for the misery of this.

I also struggled with the accounting trace, and have blogged the specific information here.

I’ll give the key bits of knowledge, and let any interested reader fill in the blanks – for example adding try…catch statements around the code.

MQ provides a PCF Agent…

But it does not help here.  The agent allows you to send a PCF request, and it returns the result; it does not help when you just want to get PCF messages that are already on a queue.

Connect to MQ

mq = new MQQueueManager(data.qm, parms);

Access the queue

queue = mq.accessQueue(data.queue,MQConstants.MQOO_BROWSE  | MQConstants.MQOO_FAIL_IF_QUIESCING);

Get the message from MQ

MQGetMessageOptions gmo = new MQGetMessageOptions();

gmo.waitInterval = data.waitTime;
gmo.options = MQConstants.MQGMO_WAIT
+ MQConstants.MQGMO_PROPERTIES_IN_HANDLE
// + MQConstants.MQGMO_BROWSE_NEXT
;
msg = new MQMessage();
queue.get(msg, gmo);

Display any message properties

The MQ PCF code does not handle messages with an MQRFH2 header in front of the PCF data.

Use gmo.options = … + MQConstants.MQGMO_PROPERTIES_IN_HANDLE to convert the RFH2 to message properties.

Process the properties.

Enumeration props = msg.getPropertyNames("%");
if (props != null) {
  while (props.hasMoreElements()) {
    String propName = (String) props.nextElement();
    Object propObject = msg.getObjectProperty(propName);
    System.out.println(propName + ":" + propObject.toString());
  }
}

There is an MQHeaderList class, and an iterator, but these did not seem very reliable. Even after I deleted the MQRFH2 header it still existed, and the PCF classes did not work.

Make it a PCF’able message

PCFMessage message = new PCFMessage(msg);

Display all of the elements in the PCF message

Enumeration<?> e = message.getParameters();
while (e.hasMoreElements()) {
   PCFParameter p = (PCFParameter) e.nextElement();
   if (p.getType() == MQConstants.MQCFT_GROUP) doGroup((MQCFGR) p);
   else {
     int parm = p.getParameter();        // parameter number
     String tp = p.getParameterName();   // parameter name as a string
     String value = p.getStringValue();  // value
     System.out.println(tp + ":" + value);
   }
}

Or you can get the specific fields from the message, for example message.getStringParameterValue(MQConstants.MQCAMO_START_DATE);

Process a group of elements

void doGroup(MQCFGR sgroup)
{
  Enumeration<?> f = sgroup.getParameters();
  while (f.hasMoreElements()) {
     PCFParameter q = (PCFParameter) f.nextElement();
     if (q.getType() == MQConstants.MQCFT_GROUP) doGroup((MQCFGR) q);
     else System.out.println(q.getParameterName() + ":" + q.getStringValue());
  }
}

And use similar logic for processing the fields as before.

Processing lists

Many elements have a list of items: a list of integers, or a list of 64-bit integers (long).   The array has element 0 for Non Persistent and element 1 for Persistent.

I found it easiest to convert integers to longs, so I had to write fewer methods.

long[] longs = {0,0};

if (q.getType() == MQConstants.MQCFT_INTEGER64_LIST)
{ 
  MQCFIL64 i64l = ( MQCFIL64) q;
  longs = i64l.getValues(); 
}
else if (q.getType() == MQConstants.MQCFT_INTEGER_LIST)
{ 
  MQCFIL il = ( MQCFIL) q;
  int[] ii = il.getValues();
  longs = new long[ii.length];  // should be of size 2
  for (int iil = 0;iil < ii.length;iil++)
    longs[iil]= ii[iil];
}

Providing summary data

With fields like the number of puts, the data is an array of two values, Non Persistent, and Persistent.

It makes later processing much easier if you create

System.out.println("Puts_Total:" + (longs[0] + longs[1]));
System.out.println("Puts_NP:" + longs[0]);
System.out.println("Puts_P:"  + longs[1]);

There are verbs that produce data for Non Persistent and Persistent messages (BROWSE, GET, PUT, PUT1) and different categories: the number of requests, the maximum message size, the minimum message size, and the total bytes put or got.

MQIAMO_xxxS, MQIAMO_xxx_MAX_BYTES, MQIAMO_xxx_MIN_BYTES and MQIAMO64_xxx_BYTES

For MAX values the calculation is easy

REAL_GET_MAX_BYTES = Math.max(longs[0], longs[1]);

For minimum the calculation is more tricky: you need to consider whether any messages were put with each persistence.  For example

If you have PUT_MIN_BYTES = [50,0] and PUTS = [1,0] then the minimum size is 50.
If you have PUT_MIN_BYTES = [50,0] and PUTS = [1,1] then the minimum size is 0.

public long P_NP_MIN( int[] values, int[] count ) {
  long min;
  if ( count[0] > 0 && count[1] > 0 )
  {
    if (values[1] < values[0]) min = values[1];
    else min = values[0];
  }
  else if (count[0] == 0 )
     min = values[1];
  else // count[0] > 0 and count[1] == 0
     min = values[0];
  return min;
}

then use it

long REAL_GET_MIN = P_NP_MIN(message.getIntListParameterValue(MQConstants.MQIAMO_GET_MIN_BYTES),
                             message.getIntListParameterValue(MQConstants.MQIAMO_GETS));

Convert date and time into a dateTime

In some records, such as statistics and accounting, you have start dates and time, and end date and time.
It is useful to calculate the duration, and provide a dateTime value for tools like Splunk and ElasticSearch.
For example, fields MQConstants.MQCAMO_START_DATE and MQConstants.MQCAMO_START_TIME.

Some times have "." as the separator within the value, and sometimes it is ":".

long doDateTime(PCFMessage m,
               String label,
               int date,  // eg MQConstants.MQCAMO_START_DATE
               int time)  // eg MQConstants.MQCAMO_START_TIME
  {
  Calendar cal = Calendar.getInstance();
  // note: yyyy, not YYYY (which is the week-based year)
  SimpleDateFormat sdfParse = new SimpleDateFormat("yyyy-MM-ddHH.mm.ss", Locale.ENGLISH);
  SimpleDateFormat sdfFormat = new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss'Z'", Locale.ENGLISH);
  long timeMS = 0;
  try {
     // get the values from the message
     String startDate = m.getStringParameterValue(date);
     String startTime = m.getStringParameterValue(time);
     startTime = startTime.replace(":", ".");  // in case some times have ":" in the value
     String dateTime = startDate + startTime;  // make the dateTime value
     cal.setTime(sdfParse.parse(dateTime));    // parse the combined value
     Date dateTimeVal = cal.getTime();
     System.out.println(label + ":" + sdfFormat.format(dateTimeVal)); // print formatted time
     timeMS = dateTimeVal.getTime(); // number of milliseconds
    } catch (PCFException e) {
     e.printStackTrace();
    } catch (ParseException e) {
     e.printStackTrace();
    }
    return timeMS;
}

You need to substring the first 8 characters of the ActivityTrace OperationDate, as it is padded with nulls.

Some times are in the format HH.mm.ss, and so the SimpleDateFormat has "." in the time parsing pattern.  The ActivityTrace OperationTime has HH:mm:ss, so the doDateTime routine uses .replace(":", ".").

Calculating the duration of statistics and accounting records.

You can then do

long startTime =
doDateTime(m,"StartDateTime",MQConstants.MQCAMO_START_DATE,
                             MQConstants.MQCAMO_START_TIME);
long endTime =
doDateTime(m,"EndDateTime",MQConstants.MQCAMO_END_DATE, 
                           MQConstants.MQCAMO_END_TIME);
long durationInMilliseconds = endTime-startTime;
if (durationInMilliseconds == 0) durationInMilliseconds = 1;

And use this to calculate rates, such as messages put per second, as part of the output of the program.
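As a sketch of that rate calculation (the helper name is mine):

```java
import java.util.Locale;

// Messages per second from a message count and the record duration in
// milliseconds.  The caller has already forced the duration to a minimum
// of 1 ms, so there is no divide by zero.
class Rates {
    static String perSecond(long count, long durationInMilliseconds) {
        return String.format(Locale.ENGLISH, "%.2f",
                             1000.0 * count / durationInMilliseconds);
    }
}
```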

Doing calculations on the data

If you are iterating over the parameters, you need to be careful about data from multiple parameters.

For example within a queue accounting record you get fields

MQIAMO_OPENS
MQCAMO_OPEN_DATE
MQCAMO_OPEN_TIME
MQIAMO_CLOSES
MQCAMO_CLOSE_DATE
MQCAMO_CLOSE_TIME
MQIAMO_PUTS
MQIAMO_PUTS_FAILED
MQIAMO_PUT1S
MQIAMO_PUT1S_FAILED
MQIAMO64_PUT_BYTES
MQIAMO_PUT_MIN_BYTES
MQIAMO_PUT_MAX_BYTES

So if you want to display the number of puts failed as a percentage of all puts you need logic like

switch (p.getParameter()) {
  case MQConstants.MQIAMO_PUTS:
    puts = (int[]) p.getValue();
    break;

  case MQConstants.MQIAMO_PUTS_FAILED:
    int putsFailedPC = 100 * (Integer) p.getValue() / (puts[0] + puts[1]);
    break;
}

The values in a PCF message could in theory be in any order, though I expect a defect would be raised if the order changed.
To be bullet proof you could iterate over the elements and store them in a hashmap, then check that the fields you want to use exist, then extract the values and use them.
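That bullet-proof approach can be sketched as below. This is a sketch only; the class is mine, and the parameter numbers passed in stand in for the MQConstants values.

```java
import java.util.HashMap;
import java.util.Map;

// Collect every parameter into a map first, then do calculations only
// when all the inputs are present, so field order does not matter.
class ParmMap {
    // Returns the failed puts as a percentage of all puts, or null if
    // either field is missing or no puts were made.
    static Integer putsFailedPercent(Map<Integer, Object> parms,
                                     int putsParm, int putsFailedParm) {
        int[] puts = (int[]) parms.get(putsParm);      // [NonPersistent, Persistent]
        Integer failed = (Integer) parms.get(putsFailedParm);
        if (puts == null || failed == null) return null;  // field missing
        int total = puts[0] + puts[1];
        if (total == 0) return null;                      // avoid divide by zero
        return 100 * failed / total;
    }
}
```

The first pass over the message fills the map (parameter number to value); the calculation pass then never trips over a field that has not arrived yet.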

Special cases…

I hit some “special cases” – what I would call bugs, in some of the processing.

Many strings have trailing nulls instead of trailing blanks (as documented), so you should trim nulls off every string value.  Here is some code from Gwydion:

String trimTrailingNulls(String s) {
  int stringLength = s.length();
  int i;
  for (i = stringLength; i > 0; i--) {
     if (s.charAt(i - 1) != (char) 0) {
       break;
     }
  }
  if (i != stringLength) {
    return s.substring(0, i);
  }
  return s;
}

Of course in C, printing a string with a trailing null is not a problem.


When I’ve got my code tested, and have handled all of the strange data, I’ll put my program up on GitHub.

Midrange accounting and stats are sometimes pretty useless

I was looking into a problem between two queue managers, and I thought I would use the stats and accounting to see what was going on.   These were pretty well useless.  This post should be subtitled, “why you need connection pooling in your web server,  for all the wrong reasons“.

As I walked along this morning I pictured a man in a big chair stroking a white cat, saying “For you, Mr Bond, you will not know which applications used MQ, and you will not know what they did.  For you, the struggle to understand what is going on is over”.

Which queues were used?

The first problem was that the stats and accounting only display the resolved queue name, not the name the application opened.

I had two cluster queues, QC1 and QC2.  I opened them and put a few messages to each queue.  These two queues map down to the SYSTEM.CLUSTER.TRANSMIT.QUEUE – or whatever has been configured for the cluster.

In the accounting data, I had “Queue_Name” :SYSTEM.CLUSTER.TRANSMIT.QUEUE  opened twice.  There was no entry for QC1 nor for QC2.

It works the same way if you have a QALIAS: if queue QA points to QL, when you open QA the accounting record is for QL.

This applies to the statistics records as well.

Which transaction was it?

I had the transaction running as an MDB in my web server.

For my application logic in the onMessage method,  which processed the passed message and put the reply, I had

,"Appl_Name" :CF3Name  – this is the connection pool name used for putting the message
,"Process_Id" :9987
,"Thread_Id" :42
,"User_Identifier" :colinpaice
"Queue_Name" :SYSTEM.CLUSTER.TRANSMIT.QUEUE

For the listener side of the application,  I had

,"Appl_Name" :weblogic.Server
,"Process_Id" :9987
,"Thread_Id" :43
,"User_Identifier" :colinpaice
"Queue_Name" :JMSQIN

Where JMSQIN is the name of the queue the MDB was listening on.

They are both for the same process id (9987) but they are on different threads.

I could only find out what was running from the name of the input queue.

I could not easily tie the two lots of accounting data together.  Fortunately I had a dedicated connection pool for the MDB, so I could identify it that way.  If I had different MDBs using the same connection pool, I would not be able to tell which MDB did which requests.

In some ways this is expected.  If the listener is using a connection pool, different MDBs can serially use the same thread in the connection pool, so the accounting cannot give the name of the MDB, as the MDB name will change.

So, to help you understand what is happening in your MQ applications in a web server, use connection pooling so you can identify which application issued the requests.  Connection pooling also gives you performance benefits.