I used the midrange MQ activity trace to show what my simple application was doing: putting a request to a clustered server queue and getting the reply. As a proof of concept (about 200 lines of Python), I produced the following.
The output is a .png file. You could instead create it as an HTML image map, with the nodes and links as clickable HTML links.
I've ignored any SYSTEM.* queues, so the SYSTEM.CLUSTER.TRANSMIT.QUEUE does not appear.
The red arrows show the "high level" flow between queue managers at the "architectural", hand-waving level.
- The application oemput on QMA did a put to a clustered queue CSERVER; there is an instance of the queue on both QMB and QMC, so there is a red line from QMA.oemput to the queue CSERVER on each of QMB and QMC.
- The server program, server, running on QMB and QMC, put the reply message to queue CP0000 on queue manager QMA.
The blue arrows show puts to the queue name the application specified, even though this may map to the SYSTEM.CLUSTER.TRANSMIT.QUEUE (SCTQ). There are two blue lines from QMA.oemput because one message went to QMC.CSERVER, and another went to QMB.CSERVER.
The yellow lines show the path a message took between queue managers. The message was put by QMA.oemput to queue CSERVER; under the covers it was put to the SCTQ. The activity trace record gives the remote queue manager and queue name, and the yellow line links them.
The black line shows a get from the local queue. The green line shows the get from the underlying queue. For example, suppose I had a reply queue called CP0000, with a queue alias QAREPLY over it. If the application does a get from QAREPLY, there would be a black line to CP0000, and a green line to QAREPLY.
How did I get this?
I used the midrange activity trace.
On QMA I had in mqat.ini:

applicationTrace:
  ApplClass=USER        # Application type
  ApplName=oemp*        # Application name (may be wildcarded)
  Trace=ON              # Activity trace switch for application
  ActivityInterval=30   # Time interval between trace messages
  ActivityCount=10      # Number of operations between trace msgs
  TraceLevel=MEDIUM     # Amount of data traced for each operation
  TraceMessageData=0    # Amount of message data traced
I turned on the activity trace using the runmqsc command
ALTER QMGR ACTVTRC(ON)
I ran some workload, and turned the trace off a few seconds later.
I processed the trace data into a json file using
/opt/mqm/samp/bin/amqsevt -m QMA -q SYSTEM.ADMIN.TRACE.ACTIVITY.QUEUE -w 1 -o json > aa.json
I captured the trace on QMB, then on QMC, so I had three files: aa.json, bb.json, and cc.json. Although I captured these at different times, I could have collected them all at the same time.
jq is a "sed"-like processor for JSON data. I used it to combine these json files into one output file which the Python json support can handle.
jq . --slurp aa.json bb.json cc.json > all.json
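If jq is not to hand, the same merge can be done in Python. This is a sketch, not the author's code: amqsevt -o json writes one JSON document per event, so a streaming decoder is used to read each file and the events are collected into a single array, as jq --slurp does. The merge function and file names are illustrative.

```python
import json

def merge(files, out="all.json"):
    """Concatenate the events from several amqsevt -o json files
    into one JSON array, like: jq . --slurp aa.json bb.json cc.json"""
    decoder = json.JSONDecoder()
    events = []
    for name in files:
        text = open(name).read()
        pos = 0
        while pos < len(text):
            # skip whitespace between concatenated JSON documents
            while pos < len(text) and text[pos].isspace():
                pos += 1
            if pos >= len(text):
                break
            obj, end = decoder.raw_decode(text, pos)
            events.append(obj)
            pos = end
    with open(out, "w") as f:
        json.dump(events, f, indent=1)

# merge(["aa.json", "bb.json", "cc.json"], "all.json")
```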
The two small python files are zipped here. AT.
I used the ATJson.py Python script to process the all.json file and extract the key data in the following format:
- server, the name of the application program
- COLIN, the channel name, or “Local”
- 127.0.0.1, the IP address, or “Local”
- QMC, on this queue manager
- Put1, the verb
- CP0000, the name of the object used by the application
- SYSTEM.CLUSTER.TRANSMIT.QUEUE, the queue actually used, under the covers
- QMC, which queue manager is the SCTQ on
- CP0000, the destination (remote) queue name
- QMA, the destination queue manager
- 400, the number of times this combination was used, so 400 puts to this queue.
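The counting step can be sketched as follows. This is a hypothetical illustration, not the real ATJson.py: each MQ operation in the trace is reduced to a tuple of the fields listed above, and identical tuples are counted to give the final "number of times used" column. The row values below mirror the example; the real script builds these tuples from the trace JSON.

```python
from collections import Counter

def count_rows(rows):
    # Identical (app, channel, ip, qmgr, verb, ...) tuples collapse
    # into one line with a usage count.
    return Counter(rows)

# 400 identical put operations, matching the example values above
rows = [("server", "COLIN", "127.0.0.1", "QMC", "Put1",
         "CP0000", "SYSTEM.CLUSTER.TRANSMIT.QUEUE",
         "QMC", "CP0000", "QMA")] * 400

for key, n in count_rows(rows).items():
    print(*key, n)
```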
I had another Python program, Process.py, which took this table and used the Python graphviz package to draw a graph of the contents. This produces a file of DOT (graph description language) statements, which can then be rendered by one of the many programs that draw such charts.
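A minimal sketch of the drawing step, assuming a table already reduced to (source, destination, colour) edges. The real Process.py uses the graphviz package; this version writes the DOT text directly so the idea is visible without extra dependencies. The node names and colour convention mirror the description above (blue for the application's put, yellow for the transmission-queue hop).

```python
def to_dot(edges):
    """Emit a DOT digraph for a list of (source, dest, colour) edges."""
    lines = ["digraph mq {", "  rankdir=LR;"]
    for src, dst, colour in edges:
        lines.append(f'  "{src}" -> "{dst}" [color={colour}];')
    lines.append("}")
    return "\n".join(lines)

# Illustrative edges from the example in the text
edges = [
    ("QMA.oemput", "QMB.CSERVER", "blue"),
    ("QMA.oemput", "QMC.CSERVER", "blue"),
    ("QMA.SCTQ",   "QMB.CSERVER", "yellow"),
]
print(to_dot(edges))
```

The resulting text can be piped to a DOT renderer (for example `dot -Tpng`) to produce the picture.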
This shows what can be done. It is not a turn-key solution, but I am willing to spend a bit of time making it easier to use, so you can automate it. If you are interested, please send me your activity trace data, and I'll see what I can do.