The fog around Message Driven Beans (MDBs)

While playing with Message Driven Beans and looking at performance problems, I have learned a lot about MDBs and tuning.   The documentation was only partly useful.   There is a lot of documentation  – some concepts are explained 1000 times, but other areas are undocumented – or just wrong.  I’ll try to fill the holes in this blog post.  If I am wrong – please tell me!

This blog post is just one step on the path to configuring WebLogic Server to avoid millions of connects a day, and to seeing if you need to do any tuning (from an MQ perspective).

I’ll cover

  1. How MDBs work
  2. The different Enterprise Java Bean (EJB) standards, and how the configuration of EJBs has changed over the years.
  3. Use of resource adapters.

Message listener

We should all be familiar with an application that does an MQGET, a database update, and a commit – which commits both units of work.  A transaction manager manages the units of work with each resource manager (MQ, and database).

The same concept applies to the MDB listener running in a web server.  The listener gets a message from one queue manager, and calls the MDB on a different thread, with the message content.  The MDB connects to a (possibly different) queue manager, puts the reply, and returns. The listener does a commit. The web server, acting as a transaction manager, coordinates the commit in both queue managers.  If the application does a roll back, the whole request can be backed out (if the MDB is set up for this).

You can configure the listener tasks so the gets are in syncpoint or not, and the MDB can do its processing in syncpoint or not.

There are two distinct parts to the MDB: the listener task described above, and the application code.

This application code does not use the same MQ connection as the listener code.   This may be obvious to most people, but I found documents on the web discussing performance tuning which said to create a connection pool for the listener side of the MDB (which may connect once a day, and so may not need a connection pool), and did not mention a connection pool for the application side (which does 1 million puts, causing 1 million connects a day, and so does need a connection pool!).

The Enterprise Java Bean architecture.

From the dawn of web servers, the architecture has followed the classic pattern.

  1. This is great – everyone develops their own way of doing things.  Let each web server manufacturer have their own way of doing this.  Let JMS be different from database access.
  2. The realization that this is not a good idea and we need standards and commonality as there is basically one problem.
  3. Now that we have standards and people are going along the road together, we find there are improvements we need to make.  It could also be that there is a new leader who says “let us go in a different direction – the grass is greener over here”.  You have to maintain the old way of doing things, and also swallow the new way of doing things.

How does this impact MDBs?

EJB 2.0

With EJB 2.0 MDBs you have information stored in a JNDI repository.   This could be file based on your local machine, or in LDAP.  In this repository you have

  • a queue resource MYJMSQ and information about it, such as the MQ queue name (SERVER) and other parameters
  • connection information MYCONN, which contains the channel name, host name, port etc.

You configure your MDB with

  1. The name of the class (your program) to use, and the method to be passed messages (this is usually onMessage)
  2. The location of the JNDI repository
  3. What sort of repository it is (what Java class you need to communicate with it)
  4. What connection to use
  5. What queue to use.

The ejb-jar.xml file has “standard” information including the class name and the method name.
Each web server has its own way of passing in the other information.  For example WebLogic has a file weblogic-ejb-jar.xml with information about the location of the JNDI repository, the JNDI connection (MYCONN), and the JNDI queue name (MYJMSQ).   It also has information about the connection pool for the listener.  All these files are packaged into the .jar file for the MDB.
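As an illustration, a weblogic-ejb-jar.xml might contain something like the following (a sketch from memory; the element values and provider URL are illustrative, and element names may vary between WebLogic versions):

<weblogic-ejb-jar>
  <weblogic-enterprise-bean>
    <ejb-name>JMSSampleMDB</ejb-name>
    <message-driven-descriptor>
      <!-- listener pool -->
      <pool>
        <max-beans-in-free-pool>5</max-beans-in-free-pool>
      </pool>
      <!-- the JNDI names described above -->
      <destination-jndi-name>MYJMSQ</destination-jndi-name>
      <!-- where to find the JNDI repository -->
      <initial-context-factory>com.sun.jndi.fscontext.RefFSContextFactory</initial-context-factory>
      <provider-url>file:/u/jndi</provider-url>
      <connection-factory-jndi-name>MYCONN</connection-factory-jndi-name>
    </message-driven-descriptor>
  </weblogic-enterprise-bean>
</weblogic-ejb-jar>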

IBM’s Liberty web server stores the information elsewhere.

The application has code like

ctx = new InitialContext();
cf = (ConnectionFactory)ctx.lookup("CF1");

which says look in the same JNDI repository for the connection factory CF1.   This could be the same connection name as the listener is using, or it could be different.  I could not find a way of specifying connection pool information for this connection, so by default every message put to MQ requires an MQCONN and MQDISC.

The application can get the destination name of the replyTo queue, and use it:

Destination dest = message.getJMSReplyTo();
producer = session.createProducer(dest);
producer.send(response);

The Destination dest will have a value like queue://QMA/QUEUENAME
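To make the two parts concrete, here is a minimal sketch (my illustration, not a complete MDB; the reply text is made up) of the application side. Note the createConnection()/close() pair: without a connection pool, this drives an MQCONN and MQDISC for every message processed.

import javax.jms.*;
import javax.naming.InitialContext;

public class SampleMDB implements MessageListener {
  public void onMessage(Message message) {
    try {
      InitialContext ctx = new InitialContext();
      ConnectionFactory cf = (ConnectionFactory) ctx.lookup("CF1");
      Connection connection = cf.createConnection();     // MQCONN under the covers
      Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
      Destination dest = message.getJMSReplyTo();
      MessageProducer producer = session.createProducer(dest);
      producer.send(session.createTextMessage("reply")); // MQPUT of the reply
      connection.close();                                // MQDISC under the covers
    } catch (Exception e) {
      // report the error
    }
  }
}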

EJB 3.0

With EJB version 3, a lot has changed.

  1. There is a standard Java Connector Architecture (JCA); I think version 1.5 is current.
  2. The hard work is now done by using a resource adapter (ra).
  3. This has a different way of passing configuration information.
  4. You can now hard code data in your program (which seems a step backwards to me).

Key information is stored in activation specs.  You can code it in your Java application program (the resource adapter can query all the activationConfigProperties from a class and then process them), or have it in the .xml configuration files.

For example, with the information in your application's .java file:

@MessageDriven(
  name = "JMSSampleMDB",
  activationConfig = {
    @ActivationConfigProperty(propertyName = "destinationType",
                              propertyValue = "javax.jms.Queue"),
    @ActivationConfigProperty(propertyName = "destination",
                              propertyValue = "INPUT_Q")
  }
)
public class JMSSampleMDB implements MessageListener {
  @TransactionAttribute(value = TransactionAttributeType.REQUIRED)
  public void onMessage(Message message) {...
  }
}

Alternatively, it can be stored in the ejb-jar.xml file:

<activation-config>
  <activation-config-property> 
    <activation-config-property-name>connectionfactory</activation-config-property-name> 
    <activation-config-property-value>CF3</activation-config-property-value>
  </activation-config-property>
  <activation-config-property> 
    <activation-config-property-name>destination</activation-config-property-name> 
    <activation-config-property-value>JMSQ2</activation-config-property-value>
  </activation-config-property>
</activation-config>

The parameters CF3 and JMSQ2 are configured as part of the resource adapter, and not in JNDI.  Each resource adapter has an ra.xml file.  WebLogic stores its specific information in weblogic-ra.xml.  IBM Liberty stores it in the server.xml file.

Once you have installed the resource adapter you should then be able to add additional connections and queues to it.

The resource adapter takes the definitions and puts them into the JNDI tree within the web server.   Your application can continue to use

ctx = new InitialContext();
cf = (ConnectionFactory)ctx.lookup("CF1");

This way you can configure your connection as part of a connection pool, and avoid the MQCONN and MQDISC requests.
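For example, in IBM Liberty the server.xml can define a pooled connection factory for the application to look up (a sketch based on my understanding; the host, port, channel, queue manager and pool size are illustrative):

<jmsConnectionFactory jndiName="CF1">
  <connectionManager maxPoolSize="10"/>
  <properties.wmqJms transportType="CLIENT"
                     hostName="localhost" port="1414"
                     channel="SYSTEM.DEF.SVRCONN"
                     queueManager="QMA"/>
</jmsConnectionFactory>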

I believe you can define an MDB using definitions in JNDI – and the application part of the MDB can use the resource adapter.

As far as I have been able to find out, if you want to use connection pooling for your application program you need to use a resource adapter.  I’ll blog this in a separate topic.  It is complex and quirky, and will need a lot of description.


Java whoops causes performance problem.

I had not realized how easy it is to write a bad MQ Java program. When I worked for IBM, I had seen Java applications doing 1 million MQCONNs a day – and people complained because it was slow! I accidentally managed to recreate this when looking into another problem.

I’ll show the bad programming example – which may be typical of many applications, and I’ll show what MQ requests the various JMS methods do, and how you can tell if your application is misbehaving.

My bad Java program

If I had a Java program which does the following

public class ColinDemo{
  public static void doit(){
    for (int i = 0; i < 100; i++)
    {
      colinMQ(...);
    }
  } 
  static void colinMQ(...) { 
     MQQueueManager queueManager = new MQQueueManager("qMgrName");
  ...  
     queueManager.close();
  }
}

Every time colinMQ is entered it creates a new connection to MQ.   When queueManager.close() is executed, there will be an MQDISC.

The program as written will cause 100 MQCONNs and 100 MQDISCs… whoops.

The queueManager needs to be defined as a class variable, for example

public class ColinDemo{
  static MQQueueManager queueManager = null;

Then colinMQ() needs a check like

  if (queueManager == null)
    queueManager = new MQQueueManager("qMgrName");
or

public class ColinDemo{
  public static void doit(){
    MQQueueManager queueManager = initColinMQ(...);
    for (int i = 0; i < 100; i++)
    {
       colinMQ(queueManager, ...);
    }
    closeqm(queueManager);
  }

  static MQQueueManager initColinMQ(...){
    return new MQQueueManager("qMgrName");
  } 
  static void colinMQ(MQQueueManager qm, ...) { 
  ...
  }
  static void closeqm(MQQueueManager qm)
  {
    qm.close();
  }
}

That is all pretty basic stuff, which is obvious once you see it. The more subtle bug is when an application calls ColinDemo.doit() multiple times. On exit from doit() the queueManager will be closed, causing an MQDISC; the next call to ColinDemo.doit() will require another MQCONN.

You need to have ColinDemo hold instance data. For example, in your highest level program you say

ColinDemo cd = new ColinDemo("qMgrName");

then you can use cd.doit()… instead of ColinDemo.doit(), and of course you need a cd.close() to close the connection once it has been finished with.

When you are using JMS you need to have

connection = cf.createConnection();
session = connection.createSession(...);
consumer = session.createConsumer(destination);

defined as instance variables, so they are set once when the thread is started in the morning, and are not changed or de-allocated until the end of the day when the high level program ends.

For example, in your top level JMS program

ColinDemo cd = new ColinDemo("MYQMGR");
for (int i = 0; i < 100; i++) 
{
  Message m = cd.getMessage(); 
}

with

public class ColinDemo{
  private Connection connection = null;
  private Session session = null;
  private Destination destination = null;
  private MessageConsumer consumer = null;
  private Context context = null;
  private JmsConnectionFactory cf = null;
  private String contextFactory = null; 
  private Hashtable<String, String> environment;

  ColinDemo(String qMgrName) throws Exception
  {
  // Instantiate the initial context
  // (initialContextUrl, connectionFactoryFromJndi, destinationFromJndi
  // and timeout are set elsewhere, as in the basic JMS program below)
  contextFactory = "com.sun.jndi.fscontext.RefFSContextFactory";
  environment = new Hashtable<String, String>();

  environment.put(Context.INITIAL_CONTEXT_FACTORY, contextFactory);
  environment.put(Context.PROVIDER_URL, initialContextUrl);
  context = new InitialDirContext(environment);
  cf = (JmsConnectionFactory)
  context.lookup(connectionFactoryFromJndi);
  // Lookup the destination
  destination = (JmsDestination) 
  context.lookup(destinationFromJndi);

  connection = cf.createConnection();
  session = connection.createSession(false,Session.AUTO_ACKNOWLEDGE);
  consumer = session.createConsumer(destination);

  // Start the connection
  connection.start();
  }

  public Message getMessage() throws JMSException
  { 
    return consumer.receive(timeout);
  } 
}
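The class also needs the cd.close() described above; a minimal version (my addition) to add to the ColinDemo class frees the objects in the reverse order to their creation:

  public void close()
  {
    try {
      if (consumer   != null) consumer.close();
      if (session    != null) session.close();
      if (connection != null) connection.close(); // this drives the MQDISC
    } catch (JMSException e) {
      // report the error
    }
  }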

What goes on under the covers.

A basic JMS program looks like

String contextFactory = "com.sun.jndi.fscontext.RefFSContextFactory";
Hashtable<String, String> environment = new Hashtable<String, String>();
environment.put(Context.INITIAL_CONTEXT_FACTORY, contextFactory);
environment.put(Context.PROVIDER_URL, initialContextUrl);
context = new InitialDirContext(environment);
cf = (JmsConnectionFactory) context.lookup(connectionFactoryFromJndi);

// Lookup the destination
destination = (JmsDestination) context.lookup(destinationFromJndi);

connection = cf.createConnection();
session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
consumer = session.createConsumer(destination);

// Start the connection
connection.start();
// get a message 
Message message = consumer.receive(timeout);

Using the distributed MQ activity trace we can see the various MQ verbs which are issued.
The program started at 09:00:30 and ended at 09:00:45.

connection = cf.createConnection();
has the following activity trace records
Tid Date       Time     Operation  CompCode MQRC    HObj (ObjName) 
001 2019-07-26 09:00:30 MQXF_CONNX MQCC_OK  0000    -
001 2019-07-26 09:00:30 MQXF_OPEN  MQCC_OK  0000    2    ( ) 
001 2019-07-26 09:00:30 MQXF_INQ   MQCC_OK  0000    2    ( )
001 2019-07-26 09:00:30 MQXF_CLOSE MQCC_OK  0000    2    ( )
001 2019-07-26 09:00:45 MQXF_DISC  MQCC_OK  0000    -

We can see the transaction starting at 09:00:30 and ending at 09:00:45.

The MQXF_INQ call is querying the queue manager for

MQIA_COMMAND_LEVEL (31)
MQIA_PLATFORM (32)
MQIA_CODED_CHAR_SET_ID (2)
MQCA_Q_MGR_NAME (2015)
MQCA_Q_MGR_IDENTIFIER (2032)
MQIA_ADVANCED_CAPABILITY (273)

session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

produces the following trace records

Tid Date       Time     Operation  CompCode MQRC    HObj (ObjName) 
001 2019-07-26 09:00:30 MQXF_CONNX MQCC_OK  0000 -
001 2019-07-26 09:00:45 MQXF_DISC  MQCC_OK  0000 -

consumer = session.createConsumer(destination);

adds the following to the session data.

Tid Date       Time     Operation  CompCode MQRC HObj (ObjName) 
001 2019-07-26 09:00:30 MQXF_OPEN  MQCC_OK  0000 2    (Q1 ) 
001 2019-07-26 09:00:45 MQXF_CLOSE MQCC_OK  0000 2    (Q1 )

connection.start();

adds nothing

Message message = consumer.receive(timeout);

adds the following trace data to the session data.

Tid Date       Time     Operation CompCode MQRC HObj (ObjName)
001 2019-07-26 09:00:30 MQXF_GET  MQCC_OK  0000 2    (Q1 )

So by having
connection = cf.createConnection();
session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
consumer = session.createConsumer(destination);

issued just once, you will save many MQCONN, MQDISC, MQOPEN, MQCLOSE and MQINQ calls.
If you see a pattern of

Tid Date       Time     Operation  CompCode MQRC HObj (ObjName) 
001 2019-07-26 09:00:30 MQXF_CONNX MQCC_OK  0000 -
001 2019-07-26 09:00:30 MQXF_OPEN  MQCC_OK  0000 2    ( ) 
001 2019-07-26 09:00:30 MQXF_INQ   MQCC_OK  0000 2    ( )
001 2019-07-26 09:00:30 MQXF_CLOSE MQCC_OK  0000 2    ( )
001 2019-07-26 09:00:45 MQXF_DISC  MQCC_OK  0000 -

you are doing too many connects – and need to look into it!

I am working on some other posts for making your Message Driven Bean applications running in a web server more efficient.


Thanks to Roger Lacroix for helping me get the Java right.  His comments reminded me of the rules of computing:

  • If you open it, close it
  • If you allocate it, free it
  • Clean up after yourself.
  • Be a good citizen

On the web there are some good examples, and some bad examples.  One good example had

MQQueueManager queueManager = null;
try{
    queueManager = new MQQueueManager("qMgrName");
    doWork(queueManager);    
    ...
   } catch (Exception je)
   {...
   }
finally{
    try{ queueManager.close(); }
    catch (Exception je){...}
   }

When execution leaves the try{} block, the statements in the finally block are always executed, so you can be sure that the queueManager object will always be closed.

Other examples did not have code to close the connection, and the connection was left open until the program terminated, which could be months later!  Be a good citizen: free it when you no longer need it.


Why is my Java application doing multiple gets because the receive buffer is too small?

When investigating an MQ JMS performance problem, I noticed that there was a pattern of

MQGET – 2080 (0820) (RC2080): MQRC_TRUNCATED_MSG_FAILED

followed by an MQGET with a bigger buffer which worked.

Being a performance person, I thought these double gets did not look right.

I spoke to Paul Titheridge of IBM, who said this was fixed back in MQ V6 with a Java property

com.ibm.mq.jmqi.defaultMaxMsgSize

If you specify

-Dcom.ibm.mq.jmqi.defaultMaxMsgSize=8192

in the Java start-up, you will get an 8KB buffer instead of the default 4KB.
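For example (YourJmsApp is a placeholder for your own application class):

java -Dcom.ibm.mq.jmqi.defaultMaxMsgSize=8192 YourJmsApp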
Easy when you know how!

Showing what the C client is doing and how long it took.

I was asked to get involved in a dispute between the application team and the MQ team about a slow C application running as a client. The application team felt it was MQ, the MQ team felt it was somewhere else. We had lots of feelings but no facts. I took a client trace, post processed it, and could point the finger of blame. I then spent more time digging into the trace and could see where the time was being spent.

Collecting a trace

We used strmqtrc -e -d 0 to turn on the trace; -d 0 says collect zero length data buffers.
We ran the application and used endmqtrc -e to stop the trace.
As we were on Linux we had to use dspmqtrc *.TRC to format the trace. You may have lots of files in /var/mqm/trace. The trace files for the client will have the userid associated with the application; files with userid mqm come from the queue manager.

If you look at the trace file there is so much data it is tempting to give up.
At the top of the file is a header which includes a line “Program Name :- …” so you can check this is the correct file.

The trace data looks like
20:41:03.511815 17291.5      :      { MQGET
20:41:03.511819 17291.5      :       -{ reqGetConn
20:41:03.511822 17291.5 RSESS:000007 -} reqGetConn rc=OK FunctionTime=3
20:41:03.511826 17291.5 RSESS:000007{ MQGET
  • 20:41:03.511815 is a time stamp
  • 17291.5 is processid.threadid
  • the 17291 is the same as the file name AMQ17291.0.FMT
  • {, -{, --{ show the nesting level of the function, so { is the top level, -{ is the next level deeper, etc.

Using the command

grep -e " {" -e " }" *291*.FMT | less

extracts just the high level MQ API calls, with the output from different threads intermixed

20:41:03.637143 17291.8 : } MQPUT rc=OK FunctionTime=874
20:41:03.637146 17291.8 : { MQGET
20:41:03.641930 17291.6 : } MQGET rc=OK FunctionTime=4784
20:41:03.641949 17291.6 : { MQPUT

If you look in the trace file near the entry times you will see the MQ control blocks, such as the MQMD and MQOD, so you can identify which queues are being used, etc.

grep -e " {" -e " }" *291*.FMT | grep 17291.8 | less

shows you the high level data in the trace file for just thread 8.

(1) 20:41:06.069828 17291.8 : { MQGET
(2) 20:41:06.105633 17291.8 : } MQGET rc=OK FunctionTime=6784
(3) 20:41:06.105656 17291.8 : { MQPUT
(4) 20:41:06.139679 17291.8 : } MQPUT rc=OK FunctionTime=482
(5) 20:41:06.953967 17291.8 : { MQGET

Note: the FunctionTime is often miscalculated and should be ignored.

  • The first MQGET took (2) – (1) 20:41:06.105633 – 20:41:06.069828 = 0.035805 seconds.
  • After the get to the following MQPUT it took (3) – (2) 20:41:06.105656 – 20:41:06.105633 = 0.000023 seconds.
  • The MQPUT took (4)-(3) 20:41:06.139679 – 20:41:06.105656 = 0.034023 seconds
  • After the MQPUT to the following MQGET (5) – (4) it took 20:41:06.953967 – 20:41:06.139679 = 0.814288 seconds

In summary, the MQ requests each took about 35 milliseconds, the time between the get and the put was 0.023 milliseconds, but the time between the put and the following get was 814 milliseconds. This shows that most of the time was spent outside of MQ.

This was enough evidence for the applications people to look elsewhere.

Script to process the trace file and pull out the times spent sending and waiting for data

It is hard work extracting the time stamps and calculating the time differences, so I wrote a simple Python script to do this for me.

This script gave me

(1) 20:41:04.400034 17291.6 : { MQPUT 0.000017
(2) 20:41:04.404946 17291.6 : } MQPUT 0.004912 sendTime 0.000960 waitTIme 0.003281
(3) 20:41:04.404949 17291.6 : { MQGET 0.000003
(4) 20:41:04.416523 17291.6 : } MQGET 0.011574 sendTime 0.000050 waitTIme 0.011167

(2) shows that the MQPUT took 0.004912 seconds, of which 0.000960 seconds was spent doing TCP send requests, and 0.003281 seconds was spent waiting for the request to be executed, and the response sent back. The message was large, so there were multiple send requests.
(3) shows the time between the end of the previous MQPUT, and the MQGET request was 0.000003 seconds.
(4) shows the time of the MQGET. Overall it took 0.011574 seconds, of which 0.000050 seconds was doing the send (there was only one send request), and 0.011167 seconds was spent waiting for the request to be executed, and the response sent back.

Digging deeper to see what is going on under the covers

Having a performance background I thought I would dig a bit deeper into this.

The overall picture I have is of getting to the airport. Do I take a taxi or take the bus?
It is usually better for everyone if people take the bus, because it is more efficient overall, and can reduce the amount of traffic. The journey may be a little longer because the bus has to stop to set down and pick up people. If I needed to minimise the journey time, I would take a taxi. It may mean I get there faster than the bus – but it may not, if the taxi is queued behind the bus and cannot overtake.

The basic flow of a channel is
1. The application issues an MQ request.
2. One or more TCP send requests pass the MQ verb and any data (such as the message payload for an MQPUT). No more than 64 KB is sent per send request, so big messages will require multiple sends.
3. The last request includes the “end of data” flag.
4. The request is processed.
5. The response and any data are sent back; a thread does one or more TCP receives to get the response and any data (for example the message payload for an MQGET request).
6. When all the data has been received, the request returns back to the application.

For many years the application sent the data and blocked waiting for the response. If you had an application server, each thread had its own connection. This could lead to a very large number of connections. This configuration is called sharecnv(0).

With shared conversations, introduced in MQ V7, there is one TCP connection, and threads within a process can use it. The number of threads that can share it is defined with sharecnv(n).
The logic is very similar:
1. The application issues an MQ request.
2. The application does lock : TCP send : unlock. Multiple threads can send data, so you may get sends for multiple threads intermingled; send for thread 1, send for thread 2, send for thread 1, send for thread 3, etc.
3. The last send for the application request says “end of data”.
4. The application thread is suspended.
5. There is a “reply thread” which issues the TCP receive requests.
6. When all the data has been returned for an application, the application thread is woken up.
This has the advantage that there may be only one connection for multiple threads in the process.  This can be important if you have a very large number of threads using MQ.

With sharecnv(0) the channel was blocked during the MQ request.
Having two (or more) threads, one for sending and a reply thread, means that asynchronous put and asynchronous get can be supported.

How does the performance compare?

Sending data….

The TCP send request just passes the data into TCP buffers, so is quick. The TCP protocol says that after a number of bytes have been sent, TCP will wait till it has had confirmation that the earlier bytes have been received at the remote end. So looking at the TCP level you may have send(fast) send(fast) send(fast) send(slow – waiting for the ack from the far end), send (fast) send(slow)

This is the same for sharecnv(0) and sharecnv(> 0).

If you had sharecnv(0), so no sharing of the connection, the channel would do multiple sends, then perhaps do a database update. You may have other threads in the process also doing multiple sends and then perhaps a database update. If you did a packet trace of the network link you might see data for connection 1, data for connection 2, data for connection 1, data for connection 3 – just as for the sharecnv(>0) case, but these requests may be done in parallel, depending on the configuration.

So for sharecnv(0) and sharecnv(>0) you should expect similar send data rates.

Receiving data

The channels with sharecnv(0) have a dedicated connection. With sharecnv(>1) there is one reply connection shared by the users. With a low to medium throughput this should give similar throughput to multiple channels with sharecnv(0). Because of the way the low level TCP protocol works, I can see that sometimes a channel defined with sharecnv(>1) may be faster!

With high throughput data may be delayed getting onto the shared connection, whereas with the sharecnv(0) channels the data can be flowing over connections in parallel.

What value should I use for sharecnv(n)?

  1. If you want to use the new asynchronous functions, n must be greater than 0.
  2. If you want the maximum possible throughput and minimum response time, at the cost of a larger number of connections, then use n = 1.
  3. If your throughput requirements are lower, and the applications can tolerate a small increase in response time, then a larger n value will mean you need fewer connections – and so your queue manager can support more application instances.
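SHARECNV is an attribute of the server connection channel the client applications use. For example, in MQSC (APP1.SVRCONN is a made-up channel name):

ALTER CHANNEL(APP1.SVRCONN) CHLTYPE(SVRCONN) SHARECNV(1)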


What shape is your application splat?

There is a limited set of architectural patterns for MQ. I was working with Niels Simanis on how to identify them, and how to see if they change over time. Niels suggested the application splat gave a good visualization. The data behind it could be used by machine learning to tell if the pattern changes.

What are some typical application patterns?

1. Classic front end application MQCONN.. MQPUT, MQGET, .. MQDISC
2. Classic front end application with optimised application, MQCONN, n * ( MQPUT, MQGET ).. MQDISC
3. Classic back end MQCONN n * ( MQGET, MQPUT to different queues), MQDISC
4. Sender channel: MQGET, SEND, MQGET, SEND, MQGET, SEND (puts and gets to SYSTEM.CHANNEL.SYNCQ)
5. Receiver channel: RECEIVE, MQPUT, RECEIVE, MQPUT (puts and gets to SYSTEM.CHANNEL.SYNCQ)
6. Trigger monitor: MQCONN, n * (MQGET with browse)

Within this we can have persistent messages or non-persistent messages; big messages and small messages.

Some poor patterns which waste resources.

1. Do no work: MQCONN, MQDISC
2. The polling application: MQPUT, n * (MQGET with a wait of 10 milliseconds)
3. The careless application: every MQGET gets return code “buffer too small”, allocates a bigger buffer, and issues the MQGET again

What is a splat?

From the MQ accounting data, or the application activity trace (on midrange), you can extract statistics on the various MQ verbs:
1. MQCONN
2. MQPUT of persistent messages
3. MQPUT of non persistent messages
4. MQPUT1 of persistent messages
5. MQPUT1 of non persistent messages
6. Successful MQGETs of persistent messages
7. Successful MQGETs of non persistent messages
8. Successful MQGET with browse
9. Number of gets with non zero return code
10. Number of queues used for putting. For example putting to a clustered queue
11. Number of queues used for getting
12. Big puts > n KB
13. Small puts <= n KB
14. Big gets > n KB
15. Small gets <= n KB
16. MQCTL
17. MQCB
18. MQCMIT
19. MQBACK
20. MQPUB
21. MQSUB

MQOPEN, MQCLOSE, MQINQ and MQSET could be added to the list to make it more complete.

You can take this data and create a radar or splat chart.

[Splat chart]

The graph looks like a paint ball has been splattered against a wall, or for the more refined, dropping a glass of red wine on a white carpet.

The data is normalized by dividing all the numbers by the sum of successful puts and gets (or 1 if there are no successful puts or gets).
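A sketch of the normalization (my own illustration in Java, not Niels' code; the verb names are keys into a map of counts extracted from the accounting data):

import java.util.HashMap;
import java.util.Map;

public class Splat {
  // divide each verb count by the number of successful puts + gets
  static Map<String, Double> normalize(Map<String, Long> counts) {
    long puts = counts.getOrDefault("MQPUT", 0L) + counts.getOrDefault("MQPUT1", 0L);
    long gets = counts.getOrDefault("MQGET", 0L);
    double divisor = Math.max(1L, puts + gets); // use 1 if there were no puts or gets
    Map<String, Double> splat = new HashMap<>();
    counts.forEach((verb, n) -> splat.put(verb, n / divisor));
    return splat;
  }
}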

What does this show us?

We can make several observations from this chart

  1. It looks like there is a connect for every MQPUT – this gives scope for optimization
  2. Using MQPUT1 may be better than MQOPEN, MQPUT, MQCLOSE
  3. There is a high number of get errors.  In this case every message had RC 2080 – buffer too small.
  4. There were puts to multiple queues.  This was a clustered queue with two backends.
  5. The size of the put message was > 4000 bytes.  The returned messages were a mixture of sizes: some were > 4000 bytes, the remainder were below 4000 bytes.

With this splat graph it is easy to see if the applications are similar. For example:
1. Are two business applications similar?
2. Has the shape of an application changed over time, either due to “improvements” or the workload changing?
3. You can have a “best practices” profile and see how different an application is from your best practice profile.


Effects of application tuning

[Splat chart after tuning]

The application was tuned to use non persistent messages (as persistence was not needed) and do more work within MQCONN… MQDISC – leading to a reduction in MQCONNs.