There is a spectrum ranging from giving every application its own queue manager spread across Linux images, to having one big server with a few queue managers servicing all the applications.
There are good points and bad points across the range from share-nothing to share-everything.
What do you need to consider when deciding how to share resources within the environment?
You may want to provide isolation
- For critical applications, so other applications cannot impact it (keep trouble out)
- To protect applications from a “misbehaving” application and to minimise its impact (keep trouble in).
- For regulatory reasons
- For capacity reasons, for example:
  - Disk I/O response time and throughput
  - Amount of RAM needed
  - Amount of virtual storage
  - Number of TCP ports
  - Number of MQ connections
  - Number of open files in the operating system
- For security. It is often easier to deny people access to an image, than to put all the controls in place within the image.
- You have more granularity when shutting down an image – fewer applications are impacted.
- Restart time may be shorter.
If your requirements don’t fit into the above, you should consider sharing resources.
The advantages of sharing
- Fewer environments to manage.
- Provisioning can be automated, but a large number of small images is still hard to manage.
- Monitoring is easier – you do not have so many systems to look at. (How big a screen do you need to show every system on it?)
- You have to do changes and upgrades less frequently.
- Removing images tends to leave information behind – for example information about a deleted clustered queue manager stays in a full repository for many days.
- The operating system may be able to manage work better than a VM hypervisor “helping”.
- Fewer events and situations to manage.
- By having more work on fewer queue managers, per-message costs can be reduced. For example, a channel sending 10 messages in a single batch uses much less resource than 10 batches of one message each.
You may need to provide multiple queue managers for availability and resilience, but rather than provide two queue managers for applicationA and two queue managers for applicationB, you may be able to have two shared queue managers, each supporting both applicationA and applicationB.
You can provide isolation within a queue manager by
- Having specific application queue names, for example queue names that start with an application prefix. You can then define a security profile (authrec) based on that prefix.
- You can use split cluster transmit queue (SCTQ) so clusters do not share the SYSTEM.CLUSTER.TRANSMIT.QUEUE – but have their own queue and their own channels.
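A minimal MQSC sketch of both techniques; the queue names, principal, and channel names here are illustrative, not from the original text:

```
* Prefix-based isolation: all of applicationA's queues share a prefix,
* and one generic authority record covers the whole set.
DEFINE QLOCAL('APPA.REQUEST')
DEFINE QLOCAL('APPA.REPLY')
SET AUTHREC PROFILE('APPA.**') OBJTYPE(QUEUE) +
    PRINCIPAL('appauser') AUTHADD(PUT,GET,BROWSE,INQ)

* Split cluster transmit queues: give each cluster-sender channel its
* own transmission queue instead of sharing SYSTEM.CLUSTER.TRANSMIT.QUEUE.
ALTER QMGR DEFCLXQ(CHANNEL)
* Or define a transmit queue for a specific set of channels:
DEFINE QLOCAL('XMITQ.CLUSTER1') USAGE(XMITQ) +
    CLCHNAME('TO.CLUSTER1.*')
```

With `DEFCLXQ(CHANNEL)` the queue manager creates a separate dynamic transmit queue per cluster-sender channel, so one cluster's traffic cannot back up behind another's.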
You may think that by providing multiple instances you are providing isolation. In fact there can be interaction at every level – queue manager, operating system, hypervisor, disk subsystem, network controllers – you just do not see it.
I had to work on a problem where MQ in a virtualized distributed environment saw very long disk I/O response times. We could see this from an MQ trace and from the Linux performance data. The customer’s virtualization people said the average response time was OK, so no issue. The people in charge of the Storage Area Network said they could not see any problems. The customer worked around it by making all messages non-persistent, which fixed the performance problem but may have introduced other data problems! As my father used to say, the more moving parts, the more parts that can go wrong.