Using a VSAM file from another system.

I have been working with two levels of the ADCD z/OS system, Z24C and Z25D. I want to be able to use VSAM files from the Z24C level system on the Z25D system.

With non-VSAM files it is easy: I can define an alias for a high-level qualifier, such as my userid COLIN, which points to the user catalog containing my data sets. It is a bit harder with VSAM files, especially where there is a file with the same name on both systems (such as CSF.CSFCKDS).
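
For example, a one-line IDCAMS sketch of such an alias, reusing my userid and the user catalog name from the examples below (which catalog actually holds your data sets is an assumption), would be:

 DEFINE ALIAS (NAME(COLIN) RELATE(USERCAT.Z24C.PRODS))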

A VSAM PATH is an alias for VSAM files.

Conceptually the first step is

//IBMDEFP JOB 1,MSGCLASS=H 
//S1 EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN DD *
DEF PATH -
(NAME(COLINQ.AUT420.AUT420D.CSI.Z24C) -
PATHENTRY( AUT420.AUT420D.CSI ) -
) -
CATALOG( USERCAT.Z24C.PRODS )
/*

File AUT420.AUT420D.CSI is in catalog USERCAT.Z24C.PRODS

The above JCL will create a name COLINQ.AUT420.AUT420D.CSI.Z24C which points to the file AUT420.AUT420D.CSI in catalog USERCAT.Z24C.PRODS. The entry COLINQ.AUT420.AUT420D.CSI.Z24C is put in the same catalog.

If you use ISPF 3.4 at this point, it will not find the data set, because the current system does not yet know where to look for data set names beginning with COLINQ.

Create an alias for the High Level Qualifier

//IBMUSERT JOB 1,MSGCLASS=H 
//S1 EXEC PGM=IDCAMS,REGION=0M
//SYSPRINT DD SYSOUT=*
//SYSIN DD *
DEFINE ALIAS (NAME(COLINQ) RELATE('USERCAT.Z24C.PRODS'))
/*

The above JCL will create an alias COLINQ, which says that to find any data sets beginning with COLINQ, look in catalog USERCAT.Z24C.PRODS.

To import the catalog into the current system

//IBMIMPC JOB 1,MSGCLASS=H 
//S1 EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN DD *
IMPORT -
OBJECTS -
((USERCAT.Z24C.PRODS -
VOLUME(C4SYS1) -
DEVICETYPE(3390))) -
CONNECT -
CATALOG(CATALOG.Z25D.MASTER)
/*
//

The above JCL says import the catalog USERCAT.Z24C.PRODS, on volume C4SYS1, device type 3390, and connect it into the (master) catalog CATALOG.Z25D.MASTER.

If the system needs to find USERCAT.Z24C.PRODS, it has enough information to be able to find it.
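
To check that the connector entry exists on the new system, a hedged IDCAMS sketch like the following should do it (this verification step is my addition, not part of the original write-up):

//IBMLISTC JOB 1,MSGCLASS=H
//S1 EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN DD *
 LISTCAT ENTRIES(USERCAT.Z24C.PRODS) ALL
/*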

What you actually do

Now that you understand the concepts, the steps you should follow are:

  • Import the catalog into the current system.
  • Define a high-level qualifier alias to point to the catalog. I might pick COLIN4C (for the Z24C system).
  • Create a path, for each VSAM file, using COLIN4C as the high-level qualifier of the path name.

You should then be able to see your data sets in ISPF 3.4.
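
Putting that together, a sketch of the IDCAMS statements (after the IMPORT CONNECT shown earlier) might look like the following; COLIN4C and the data set names follow the examples above, so the exact names on your system will differ:

DEFINE ALIAS (NAME(COLIN4C) RELATE(USERCAT.Z24C.PRODS))

DEFINE PATH -
 (NAME(COLIN4C.AUT420.AUT420D.CSI) -
 PATHENTRY( AUT420.AUT420D.CSI ) -
 ) -
 CATALOG( USERCAT.Z24C.PRODS )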

To access the Z24C /u ZFS file system on the Z25D system I used

IMPORT -
OBJECTS -
((CATALOG.Z24C.MASTER -
VOLUME(C4SYS1) -
DEVICETYPE(3390))) -
CONNECT -
CATALOG(CATALOG.Z25D.MASTER)

DEFINE ALIAS (NAME(Z24CMAST) RELATE('CATALOG.Z24C.MASTER'))

DEFINE PATH -
(NAME(Z24CMAST.ZFS.USERS ) -
PATHENTRY( ZFS.USERS )) -
CATALOG(CATALOG.Z24C.MASTER)

In Unix I created a directory

mkdir /u/old

Then I mounted the file system from TSO (ISPF option 6)

mount filesystem('Z24CMAST.ZFS.USERS') mountpoint('/u/old') type(ZFS) mode(read)

I could then access the files from /u/old/…
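
When you have finished with it, the corresponding TSO unmount (a sketch, using the same file system name as above) is:

unmount filesystem('Z24CMAST.ZFS.USERS') normal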

How do I change SSLCIPH on a channel?

Regular readers of my blog know that most of the topics I write on appear simple, but have hidden depth; this topic is no exception.

The simple answer is

  • For the client ALTER CHL(xxxx) CHLTYPE(CLNTCONN) SSLCIPH(new value)
  • For the svrconn
    • ALTER CHL(xxxx) CHLTYPE(SVRCONN) SSLCIPH(new value)
    • REFRESH SECURITY
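
For example, a sketch of the MQSC commands, with an assumed channel name COLIN.SVRCONN and an assumed new CipherSpec of ANY_TLS12_OR_HIGHER (available on recent MQ levels), would be:

* on the queue manager that builds the CCDT
ALTER CHL(COLIN.SVRCONN) CHLTYPE(CLNTCONN) SSLCIPH(ANY_TLS12_OR_HIGHER)
* on the server queue manager
ALTER CHL(COLIN.SVRCONN) CHLTYPE(SVRCONN) SSLCIPH(ANY_TLS12_OR_HIGHER)
REFRESH SECURITY TYPE(SSL)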

The complexity occurs when you have many clients trying to use the channel, and you cannot change them all at the same time (imagine trying to change 1000 of them, when half of them are not under your control). For the clients that have not changed, you will get the message

AMQ9631E: The CipherSpec negotiated during the SSL handshake does not match the required CipherSpec for channel ‘…’.

in the /qmgrs/xxxx/errors/AMQERR01.LOG

For this problem the CCDT is your friend. See my blog post here.
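
As a reminder, a client typically finds the CCDT through two environment variables; a sketch (the directory is an assumption for a Linux client, AMQCLCHL.TAB is the default table name) is:

export MQCHLLIB=/var/mqm/ccdt
export MQCHLTAB=AMQCLCHL.TAB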

I have a client channel CHANNEL(C1) CHLTYPE(CLNTCONN)

On my CCDT queue manager I created another channel, the same as the one I want to update:

DEF CHANNEL(C2) CHLTYPE(CLNTCONN) LIKE(C1)

On my server queue manager I used

DEF CHANNEL(C2) CHLTYPE(SVRCONN) LIKE(C1)

SET CHLAUTH(C2) TYPE(BLOCKUSER) USERLIST(….)

REFRESH SECURITY

When I ran my sample connect program, it connected using C1 as before.

On the MQ Server, I changed the SSLCIPH to the new value for C1.

When I ran my sample connect program it connected using channel(C2). In the AMQERR01.LOG I had the message

AMQ9631E: The CipherSpec negotiated during the SSL handshake does not match the required CipherSpec for channel ‘C1’.

So the changed channel did not connect, but the second channel with the old cipher spec worked successfully. (The use of the backup channel was transparent to the application.)

I then changed DEF CHANNEL(C1) CHLTYPE(CLNTCONN) so SSLCIPH had the correct, matching value. When my sample program was run, it connected using channel C1 as expected.

Once I have changed all my channels and get no errors in the error log:

  • I can change the CHLAUTH(C2) BLOCKUSER rule so it blocks everyone, and either set WARN(YES) to just give a warning, or give no warning and no access
  • Remove C2 from the CCDT queue manager, so applications no longer get this in their CCDT
  • Finally delete the channel C2 on the server.
  • Go down the pub to celebrate a successful upgrade!
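
A sketch of that clean-up in MQSC; note that instead of BLOCKUSER I have used an ADDRESSMAP rule with USERSRC(NOACCESS), which is a common way to warn about, or block, all use of a channel, so adapt it to whichever CHLAUTH style you use:

* warn about anything still using C2 ...
SET CHLAUTH(C2) TYPE(ADDRESSMAP) ADDRESS('*') USERSRC(NOACCESS) WARN(YES)
* ... or, later, block it outright
SET CHLAUTH(C2) TYPE(ADDRESSMAP) ADDRESS('*') USERSRC(NOACCESS) WARN(NO) ACTION(REPLACE)
* on the CCDT queue manager
DELETE CHANNEL(C2) CHLTYPE(CLNTCONN)
* on the server queue manager
DELETE CHANNEL(C2)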


Should I do in-place or side by side migration of MQ mid-range?

With mid-range MQ there are a couple of migration options:

  • Upgrade the queue manager in place – if there are problems, restore from backup, and sort out the problems this restore may cause. You may want to do this if you have just the one queue manager.
  • Upgrade the queue manager in place – if there are problems, leave it down until any problems can be resolved. This assumes that you are a good enterprise user and have other queue managers available to process the work.
  • Create another queue manager, “next to it” (“side by side”) on the same operating system image. A better description might be “adding a new queue manager to our environment on an existing box, and deleting an old one at a later date” rather than “side by side migration”. You may already have a document to do this.

What you need to do for in-place migration.

  • Back up your queue manager (see a discussion here)
  • Shut down the queue manager, letting all work end cleanly
  • Either (see here)
    • Delete the previous version of MQ, and install the new version, or better..
    • Use Multi-install – so you have old and new versions available at the same time
  • Switch to the new version (of the multi-install)
  • Restart the queue manager
  • Let work flow
  • Make a note of any changes you make to the configuration (for example alter qlocal…) in case you need to restore from a backup and re-apply the changes.
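
A hedged sketch of the multi-install switch on Linux; the queue manager name QMA, the installation name Installation2 and the installation path /opt/mqm2 are all assumptions:

# stop the queue manager cleanly
endmqm -w QMA
# associate the queue manager with the new installation
setmqm -m QMA -n Installation2
# set up the command environment for that installation (path is an assumption)
. /opt/mqm2/bin/setmqenv -n Installation2
# restart the queue manager on the new level, and check the version
strmqm QMA
dspmqver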

If you need to back out the migration and restore from the backup

You need to

  • Make sure there are no threads in doubt
  • Make sure all transmission queues are empty (so you do not overwrite messages when you restore from a backup)
  • Offload messages from application queues – if you are lucky there will be no messages. Do not offload messages from the system queues.
  • Shut down MQ
  • Reset the MQ installation to the older version
  • Restore from your backup (see here)
  • Any MCA channels which have been used may have the wrong sequence numbers, and will need to be reset
  • Load messages back onto the application queues
  • Reapply any changes, such as alter QL…
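
For the channel sequence number point, a hedged MQSC example (the channel name TO.QMB is an assumption), issued at the sending end and matched at the other end if needed, is:

RESET CHANNEL(TO.QMB) SEQNUM(1)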

In the situation where you have a problem, personally I think it would be easier to leave the queue manager down, rather than trying to restore it from a backup. You may want to offload any application messages first. Of course this is much easier if you have configured multiple queue managers, so leaving one queue manager shut down should not cause problems. Until any problems are fixed you cannot proceed with migrating other queue managers, and you run the risk of lower availability because there is one server fewer.

What you need to do for side by side migration.

“Side by side” migration requires a new queue manager to be created, and work moved to the new queue manager

  • If this is a cluster repository, you need to move it to another queue manager if only temporarily (otherwise you will get a new repository)
  • You need a new queue manager name
  • You need a new port number
  • Create the queue manager
  • You may want to ALTER QMGR SCHINIT(MANUAL) during the configuration, so that you do not get client applications trying to connect to your new queue manager before you are ready.
  • You need to back up all application object definitions, CHLAUTH rules etc., and reapply them to the new queue manager (see the sketch after this list). Do not copy and restore the channels.
  • Apply these application objects to the new queue manager
  • List the channels on the old system
  • Create new channels. For example the cluster receiver will need a new name, and its CONNAME will need the updated port.
  • You should be able to reuse any sender channels unchanged
  • If you are using CCDT
    • Define new client SVRCONN names (as a CCDT needs unique channel names)
    • On the queue manager which creates the CCDT, create new client CLNTCONN channels. The queue manager needs unique channel names.
    • Send the updated CCDT to applications which use this queue manager, so they can use the new queue manager. Note: From IBM MQ Version 9.0, the CCDT can be hosted in a central location that is accessible through a URI, removing the need to individually update the CCDT for each deployed client. See here
    • If you are using clustered queues, then cluster queues will be propagated automatically to the repository and to interested queue managers
    • If you are not using clustering, you will need to create sender/receiver channels, and create the same on the queue managers they attach to
  • Update automation to take notice of the queue managers
  • Change monitoring to include this queue manager
  • Change your backup procedures to back up the new queue manager files
  • Change your configuration and deployment tools, so changes to the old queue manager are copied to the new queue manager as well.
  • Configure all applications that use bindings mode, to add the new queue manager to the options. Restart these applications so they pick up the new configuration
  • When you are ready use START CHINIT
  • Alter the original queue manager to SCHINIT(MANUAL), so that when you restart the queue manager it does not start the channel initiator, and so channels do not start and process workload.
    • Note there is a strmqm -ns option. The doc says… This prevents any of the following processes from starting automatically when the queue manager starts:
    • The channel initiator
    • The command server
    • Listeners
    • Services
    • This parameter also runs the queue manager as if the CONNAUTH attribute is blank, regardless of its current value. This allows unauthenticated access to the queue manager for locally bound applications; client applications cannot connect because there are no listeners. Administrative changes must be made by using runmqsc because the command server is not running.
    • But you may not want to run unauthenticated.
  • Stop the original queue manager; after a short time, all applications should disconnect, and reconnect to the new queue manager.
  • Shut down the old queue manager, and restart it. With SCHINIT(MANUAL) it should have no channels running. Stop any listeners. If you have problems you can issue START CHINIT and START LSTR. After a day, shut down the queue manager and leave it down – in case of emergency you can just restart it.
  • After you have run successfully for a period you can delete the old queue manager.
  • Remove it from any clusters before deleting it. The cluster repository will remember the queue manager and queues for a long period, then eventually delete them.
  • Make the latest version of MQ the primary installation, and delete the old version
  • Update the documentation
  • Update your procedures – eg configuration automation
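
A hedged sketch of some of these steps on Linux; the queue manager names QMA (old) and QMA2 (new), the port 1415 and the listener name are assumptions, and the dumped definitions must be edited (for example to remove or rename channels) before they are replayed:

# create and start the new queue manager
crtmqm -u SYSTEM.DEAD.LETTER.QUEUE QMA2
strmqm QMA2

# capture the object definitions from the old queue manager ...
dmpmqcfg -m QMA -a > qma.defs.mqsc
# ... edit qma.defs.mqsc (channels, CONNAMEs, ports) and then replay it
runmqsc QMA2 < qma.defs.mqsc

# keep the channel initiator manual until you are ready,
# and define a listener on the new port
echo "ALTER QMGR SCHINIT(MANUAL)" | runmqsc QMA2
echo "DEFINE LISTENER(TCP.1415) TRPTYPE(TCP) PORT(1415) CONTROL(QMGR)" | runmqsc QMA2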

As I said at the beginning – an in-place migration looks much easier to do.