The lows (and occasional high) of managing MQ centrally.

Both while I was still at IBM, and since I retired, I have been curious about how people manage MQ in their enterprise systems.

  1. How do you deploy a change to a queue across 1000 queue managers safely, accurately, and by an authorised person – and, by the way, what do you do when one queue manager is down when you try to make the change?
  2. Are these “identical” systems actually identical – or has someone gone in, made an emergency change on one system, and left one parameter different?
  3. We have all of these naming standards – do we follow them? Did we specify encryption on all external channels?

At the bottom of this blog (more like a long essay) I show some very short Python scripts which:

  • compare queue definitions and show you the differences between them
  • check when queue attributes do not meet “corporate standards”
  • print data from the change-events queue, so you can see what people altered.

I also have scripts which display PCF data from events, statistics and so on. I need to clean them up, then I’ll publish them.
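
To give a flavour, here is a minimal sketch of the comparison idea, using the pymqi package. The queue manager names, channel, connection strings and queue name are all made-up examples:

```python
import pymqi

def queue_attrs(qmgr_name, channel, conn_info, queue_name):
    """Return the attributes of one local queue, as a dict, via a PCF inquire."""
    qmgr = pymqi.connect(qmgr_name, channel, conn_info)
    try:
        pcf = pymqi.PCFExecute(qmgr)
        # MQCMD_INQUIRE_Q returns one dict of attributes per matching queue.
        return pcf.MQCMD_INQUIRE_Q({pymqi.CMQC.MQCA_Q_NAME: queue_name})[0]
    finally:
        qmgr.disconnect()

# Two hypothetical queue managers which should hold identical definitions.
a = queue_attrs('QMA', 'ADMIN.SVRCONN', 'hosta(1414)', b'APP.REQUEST')
b = queue_attrs('QMB', 'ADMIN.SVRCONN', 'hostb(1414)', b'APP.REQUEST')

# Print only the attributes where the two definitions differ.
for key in sorted(set(a) | set(b)):
    if a.get(key) != b.get(key):
        print(key, a.get(key), b.get(key))
```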

I think Python scripting will make systems management so much easier.

Strategic tools do not seem to deliver.

There seem to be many “strategic tools” to help you, including Chef, Puppet, Ansible, and Salt, which are meant to help you deploy changes across your enterprise.

There are many comparison documents on the web – some observations, in no particular order:

  • Chef and Puppet have an agent on each machine and seem complex to set up initially.
  • Ansible does not use agents – it uses SSH commands to access each machine.
  • Some tools expect deployers to understand and configure in Ruby (so moving the complexity from knowing MQ to knowing Ruby); others use YAML, a simpler format.

This seems to be a reasonable comparison.

Stepping back from these products, I did some work to investigate how I would build a deployment system from standard tools. I have not done it yet, but I thought I would document the journey so far.

Some systems management requirements

Here is what I expect to be able to do in an enterprise MQ environment:

  • I have a team of MQ administrators. All have read-only access to all queue managers; some can only update test, some can update test and production.
  • I want to be able to easily add and remove people from role-based groups, and not wait a month for someone to press a button to give them authority.
  • I want to save a copy of the object before and after a change – for the audit trail and for disaster recovery.
  • The process needs to handle the case when a change does not work because the queue manager is down or the object is in use.
  • I want to be able to deploy a new MQ server – and have all of the objects created according to a template for that application.
  • I want to check and enforce standards, e.g. names and values (do you really need a max queue depth of 999 999 999, and why is the current depth 999 999?) – a sketch of such a check follows this list.
  • I want to be able to process the event data and statistics data produced by MQ and put them into Splunk or another tool.
  • There are MQ objects within the queue manager, and other objects such as CCDTs for clients and keystores holding TLS keys. I need to get these to wherever they are used.
  • I want to report statistics on MQ in my enterprise tool – so I need to get the statistics data from each machine to the central reporting tool.
  • I want Test to look like Production (and use the same processes), so we avoid the problem of not testing what was deployed.
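
Here is a minimal sketch of that standards check, again assuming pymqi; the naming prefixes and the depth limit are invented “corporate standards”:

```python
import pymqi

MAX_SENSIBLE_DEPTH = 100_000           # hypothetical corporate limit
NAME_PREFIXES = ('APP.', 'TEST.')      # hypothetical naming standard

qmgr = pymqi.connect('QMA', 'ADMIN.SVRCONN', 'hosta(1414)')
pcf = pymqi.PCFExecute(qmgr)

# Inquire on every local queue and flag anything breaking the standards.
for q in pcf.MQCMD_INQUIRE_Q({pymqi.CMQC.MQCA_Q_NAME: b'*',
                              pymqi.CMQC.MQIA_Q_TYPE: pymqi.CMQC.MQQT_LOCAL}):
    name = q[pymqi.CMQC.MQCA_Q_NAME].strip().decode()
    if not name.startswith(NAME_PREFIXES) and not name.startswith('SYSTEM.'):
        print(name, 'does not match the naming standard')
    if q[pymqi.CMQC.MQIA_MAX_Q_DEPTH] > MAX_SENSIBLE_DEPTH:
        print(name, 'has MAXDEPTH', q[pymqi.CMQC.MQIA_MAX_Q_DEPTH])

qmgr.disconnect()
```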

Areas I looked at

Docker is a tool designed to make it easier to create, deploy, and run applications by using containers. Containers allow a developer to package up an application with all of the parts it needs, such as libraries and other dependencies, and ship it all out as one package.

  • This may be fine for a test environment, but once deployed I still want to be able to change object attributes on a subset of the queue managers. I don’t think Docker solves this problem (it is hard to tell from the documentation).
  • I could not see how to set up the Client Channel Definition Tables (CCDT) so my client applications can connect to the appropriate production queue manager.
  • If I define my queues using clustering, then when I add a new queue manager the objects will be added to the repository cache. When I remove a queue manager from the cluster and delete the container, the objects live on in the cache for many days. This does not feel clean.
  • I wondered whether this (virtualised) environment was the right one for running my performance-critical production workload. I could not easily find any reports on this.
  • How do I manage licences for these machines, make sure we have enough of them, and make sure we are not illegally using the same licence on every machine?

Using RUNMQSC

At first, running runmqsc locally seemed to be the answer to many of the problems.

I could use secure FTP to get my configuration files down to the machine, log on to the machine, pipe the file into runmqsc, capture the output, and FTP the file back to my central machine.

  • Having all the MQ administrators with a userid on each machine can be done using LDAP groups etc., so that works OK.
  • To use a userid and password you specify runmqsc -u user_id qm, which then prompts for the password. If you pipe your commands in, you have to put the password as the first line of the piped input. This is not good practice, and I could not see a way of doing it without putting the password in the file in front of the definitions. (Perhaps a Linux guru can tell me.)

Having to FTP the files to and from the machine was not very elegant, so I tried using runmqsc as a client (the -c option). At first this seemed to work; then I tried making it secure and using an SSL channel. I could only get this to work when the channel had the same name as the queue manager – so to use queue manager QMB I needed an SSL channel called QMB. (The documentation says you cannot use the MQSERVER environment variable to set up an SSL channel.) On my queue manager, the channel QMB was already in use; I redefined my channel and got this to work.

As you might expect, I fell over the CHLAUTH rules, but with help from some conference charts written by Morag, I got the CHLAUTH rules defined so that I could allow people with the correct certificate to use the channel. I could then give the channel a userid with the correct authority for change or read access.

I had a little ponder on this, and thought that a more secure way would be to use SSL and also require a userid and password. If someone copied my keystore, they would still need the password to connect to MQ – so I would have two-factor authentication.

This is an OK solution, but it does not go far enough. It is tough trying to parse the output from runmqsc (which is why PCF was invented).

Using mqscx

Someone told me about mqscx from MQGem Software. If runmqsc is the kindergarten version, mqscx is the adult one. It does so much more and is well worth a look.

See here for a video and here for the web site.

How does it do against my list of requirements?

  • I can enter a userid and password on the command line and also pipe a file in ✔
  • One-column output (if required), so I can write the output to a file which is then very easy to parse ✔
  • I can use SSL channels ✔
  • I can use my Client Channel Definition Table (CCDT) ✔

It also has colour, multi-column output, and better use of the screen area (you can display much more on your screen), plus its own scripting language.

You can get a try-before-you-buy licence.

I moved on to Python and runmqsc…

so I could try to do useful things with it.

Using runmqsc under Python does not work very well.

I have found Python is a very good tool for systems management – see below for what I have done with it.

  • I tried using the Python “subprocess” module so that I could write data down the stdin pipe into runmqsc and capture the output written to stdout. This did not work: I think the runmqsc output is written to stdout but not flushed, so the program waiting for the data does not get it, and you get a deadlock.
  • I tried using Python “pexpect”, but this did not work either: I could send one command to stdin, but then stdin was closed and I could not send any more data.
  • Another challenge was parsing the output of runmqsc. After a couple of hours I managed to create a regular expression which parsed most of the output, but there were a few edge cases which needed more work, and I gave up on this.
  • PCF on its own is difficult to use.
  • Then I came across PyMqi – MQ for Python. This was great: I could issue PCF commands and get responses back – and I can process event queues and statistics queues! (A sketch follows this list.)
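
For example, here is a sketch which drains the configuration change-events queue and prints each event. It assumes a recent version of pymqi, which provides a PCFExecute.unpack helper for decoding PCF messages (check your version); the connection details are made up:

```python
import pymqi

qmgr = pymqi.connect('QMA', 'ADMIN.SVRCONN', 'hosta(1414)')
queue = pymqi.Queue(qmgr, 'SYSTEM.ADMIN.CONFIG.EVENT')

# Read change events until the queue is empty.
while True:
    try:
        message = queue.get()
    except pymqi.MQMIError as e:
        if e.reason == pymqi.CMQC.MQRC_NO_MSG_AVAILABLE:
            break  # nothing left to read
        raise
    # unpack() turns the raw PCF event into a dict of attribute/value pairs.
    values, header = pymqi.PCFExecute.unpack(message)
    print(values)

queue.close()
qmgr.disconnect()
```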

From this I think using PyMqi is great! My next blog post will describe some of the amazing things you can do in Python with only a few lines of code!
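
As a small teaser, here is roughly what the “SSL plus userid and password” idea from earlier looks like in pymqi. The channel name, cipher spec, keystore path and credentials are all assumptions for illustration:

```python
import pymqi

# Client channel definition: a TLS channel to the queue manager.
cd = pymqi.CD()
cd.ChannelName = b'ADMIN.SVRCONN'
cd.ConnectionName = b'mqhost(1414)'
cd.ChannelType = pymqi.CMQC.MQCHT_CLNTCONN
cd.TransportType = pymqi.CMQC.MQXPT_TCP
cd.SSLCipherSpec = b'TLS_RSA_WITH_AES_128_CBC_SHA256'

# SSL configuration: stem of the client's .kdb keystore file.
sco = pymqi.SCO()
sco.KeyRepository = b'/var/mqm/ssl/clientkey'

qmgr = pymqi.QueueManager(None)
# Certificate (keystore) AND password: the two-factor idea.
# In real use the credentials come from a secure store, not the source.
qmgr.connect_with_options('QMB', cd=cd, sco=sco,
                          user='mqadmin', password='passw0rd')
print('connected with certificate AND password')
qmgr.disconnect()
```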
