With mid-range MQ there are a few migration options:
- Upgrade the queue manager in place – if there are problems, restore from backup, and sort out the problems this restore may cause. You may want to do this if you have just the one queue manager.
- Upgrade the queue manager in place – if there are problems, leave it down until any problems can be resolved. This assumes that you are a good enterprise user and have other queue managers available to process the work.
- Create another queue manager, “next to it” (“side by side”) on the same operating system image. A better description might be “adding a new queue manager to our environment on an existing box, and deleting an old one at a later date” rather than “side by side migration”. You may already have a document to do this.
What do you need to do for an in-place migration?
- Back up your queue manager (see a discussion here)
- Shut down the queue manager, letting all work end cleanly
- Either (see here)
- Delete the previous version of MQ, and install the new version, or better..
- Use Multi-install – so you have old and new versions available at the same time
- Switch to the new version (of the multi-install)
- Restart the queue manager
- Let work flow
- Make a note of any changes you make to the configuration – for example alter qlocal… – in case you need to restore from a backup and re-apply the changes.
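The multi-install route above can be sketched as commands. This is only a sketch: the installation name Installation2, the install path /opt/mqm2, and the queue manager name QMA are assumptions for illustration – check the real names on your system with dspmqinst.

```
# List the MQ installations on this machine
dspmqinst
# Shut down the queue manager cleanly, waiting for work to end
endmqm -w QMA
# Associate the queue manager with the new installation
setmqm -m QMA -n Installation2
# Set up the environment for the new installation (note the leading dot)
. /opt/mqm2/bin/setmqenv -n Installation2
# Restart the queue manager under the new version
strmqm QMA
```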
If you need to back out the migration and restore from the backup, you need to:
- Make sure there are no threads in doubt
- Make sure all transmission queues are empty (so you do not overwrite messages when you restore from a backup)
- Offload messages from application queues – if you are lucky there will be no messages. Do not offload messages from the system queues.
- Shut down MQ
- Reset the MQ installation to the older version
- Restore from your backup (see here)
- Any MCA channels which have been used may have the wrong sequence numbers, and will need to be reset
- Load messages back onto the application queues
- Reapply any changes, such as alter QL…
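The channel reset, message reload, and re-apply steps can be sketched with runmqsc and the dmpmqmsg utility. The names QMA, APP.QUEUE, TO.QMB and the file path are assumptions for illustration:

```
# Reload the messages previously offloaded from the application queue
dmpmqmsg -m QMA -o APP.QUEUE -f /tmp/app.queue.msgs
# Reset the sequence number of a used MCA channel
echo "RESET CHANNEL(TO.QMB) SEQNUM(1)" | runmqsc QMA
# Re-apply any configuration changes made since the backup was taken
echo "ALTER QLOCAL(APP.QUEUE) MAXDEPTH(100000)" | runmqsc QMA
```

Remember the RESET CHANNEL may need to be done at both ends of the channel, depending on which end is out of step.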
If you have a problem, I personally think it would be easier to leave the queue manager down rather than trying to restore it from a backup. You may want to offload any application messages first. Of course this is much easier if you have configured multiple queue managers, where leaving one queue manager shut down should not cause problems. Until any problems are fixed you cannot proceed with migrating the other queue managers, and you may risk lower availability because you have one fewer server.
What you need to do for side by side migration.
“Side by side” migration requires a new queue manager to be created, and work moved to the new queue manager.
- If this is a cluster repository, you need to move it to another queue manager if only temporarily (otherwise you will get a new repository)
- You need a new queue manager name
- You need a new port number
- Create the queue manager
- You may want to alter qmgr SCHINIT (MANUAL) during the configuration so that you do not get client applications trying to connect to your new queue manager before you are ready.
- You need to back up all application object definitions, chlauths etc., and apply them to the new queue manager. Do not copy and restore the channels
- List the channels on the old system
- Create new channels – for example a cluster receiver, whose CONNAME will need the updated port, and a new name
- You should be able to reuse any sender channels unchanged
- If you are using CCDT
- Define new client SVRCONN names (as a CCDT needs unique channel names)
- On the queue manager which creates the CCDT, create new client CLNTCONN channels. The queue manager needs unique channel names
- Send the updated CCDT to applications which use this queue manager, so they can use the new queue manager. Note: From IBM MQ Version 9.0, the CCDT can be hosted in a central location that is accessible through a URI, removing the need to individually update the CCDT for each deployed client. See here
- If you are using clustered queues, then cluster queues will be propagated automatically to the repository and to interested queue managers
- If you are not using clustering, you will need to create sender/receiver channels, and create the same on the queue managers they attach to
- Update automation to take notice of the new queue manager
- Change monitoring to include this queue manager
- Change your backup procedures to back up the new queue manager files
- Change your configuration and deployment tools, so changes to the old queue manager are copied to the new queue manager as well.
- Configure all applications that use bindings mode, to add the new queue manager to the options. Restart these applications so they pick up the new configuration
- When you are ready use START CHINIT
- Alter the original queue manager to SCHINIT(MANUAL), so when you restart the queue manager it does not start the chinit, and so channels will not start.
- Note there is a strmqm -ns option. The doc says… This prevents any of the following processes from starting automatically when the queue manager starts:
- The channel initiator
- The command server
- This parameter also runs the queue manager as if the CONNAUTH attribute is blank, regardless of its current value. This allows unauthenticated access to the queue manager for locally bound applications; client applications cannot connect because there are no listeners. Administrative changes must be made by using runmqsc because the command server is not running.
- But you may not want to run unauthenticated.
- Stop the original queue manager. After a short time, all applications should disconnect and reconnect to the new queue manager.
- Shut down the old queue manager, and restart it. With SCHINIT(MANUAL) no channels should start. Stop any listeners. If you have problems you can issue START CHINIT and START LSTR. After a day, shut down the queue manager and leave it down – in case of emergency you can just restart it.
- After you have run successfully for a period you can delete the old queue manager.
- Remove it from any clusters before deleting it. The cluster repository will remember the queue manager and queues for a long period, then eventually delete them.
- Make the latest version of MQ the primary installation, and delete the old version
- Update the documentation
- Update your procedures – eg configuration automation
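The queue-manager-creation steps at the start of the side by side list might look like the following sketch. The names QMA2, L.QMA2, TO.QMA2, MYCLUS, the host name, and port 1415 are assumptions for illustration:

```
# Create and start the new queue manager
crtmqm QMA2
strmqm QMA2

runmqsc QMA2 <<EOF
* Do not start the channel initiator automatically until you are ready
ALTER QMGR SCHINIT(MANUAL)
* A listener on the new port
DEFINE LISTENER(L.QMA2) TRPTYPE(TCP) PORT(1415)
* A cluster receiver with a new name, and the new port in its CONNAME
DEFINE CHANNEL(TO.QMA2) CHLTYPE(CLUSRCVR) TRPTYPE(TCP) +
  CONNAME('myhost.example.com(1415)') CLUSTER(MYCLUS)
EOF
```

Once the application objects, chlauths, and channels have been applied, START CHINIT and START LISTENER(L.QMA2) open the new queue manager for work.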
As I said at the beginning – an in-place migration looks much easier to do.