Migrating an ADCD z/OS release to the next release

With traditional z/OS you upgrade products one at a time. For example, you are using one release of CICS; you install the next release of CICS and check that it works, then roll it out to all z/OS images. You take the next product and repeat the process.

When you upgrade z/OS itself, you create a new IPL image using all of the old components, such as the master catalog, page sets etc., do a test IPL, and resolve any problems.

With ADCD you have one complete system, for example with system data sets on volumes beginning with D4, and the next “release” has volumes beginning with D5. It is not just a matter of IPLing the new system, because your data and any configuration you did will not be on the new system.

In this post, I’ll cover some of the steps you need to take to be able to run on a newer level of ADCD. The list will not be complete or detailed enough, so please send me any suggestions and comments to improve it.

What do you need to think about?

There are several prerequisites for the migration process.

My initial list is:

  • Define the zPDT devmap configuration file with the new disks, so the new and old systems can see all of the DASD volumes
  • In any old configuration you want to use, change explicit volumes to their symbolic volume. The operator command D SYMBOLS gives you the symbol names. For example, change ….Z24C… to …&SYSVER… This means it will pick up the current level of whichever system you have IPLed
  • Import any user catalogs
  • Define the alias for COLIN.* data sets in the new master catalog. You can then use COLIN.* data sets from the new system.
  • Use your old RACF database, or make a copy for the new system to use
  • Copy system Unix files, for example those in /etc. It is easiest to back them up and restore them
  • Copy user Unix files. If you have a ZFS for all your files, this may be as simple as mounting it on the newer system. If you have files in the system provided file system, you will have to backup and restore them, or move them to your ‘user ZFS’.
  • Copy across your USER.* data set members. It is worth reviewing these and deleting old ones which are no longer used. I compared the USER.old.parmlib members with the ADCD.old.parmlib members to see what changes I had made.
  • Once you IPL the new system, it will use your new members
  • Copy across ICSF definitions
  • Check out all started tasks.
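As a sketch of the symbol change in the list above: the data set names are illustrative, and BPXPRMxx is just one example of a parmlib member where an explicit release qualifier can be replaced by the &SYSVER symbol reported by D SYMBOLS. Note the double period: the first ends the symbol, the second is the data set name qualifier separator.

```
/* Before: the root file system is tied to one release             */
ROOT FILESYSTEM('ZFS.Z24C.ROOT')      TYPE(ZFS) MODE(RDWR)
/* After: &SYSVER resolves to whichever release you have IPLed     */
ROOT FILESYSTEM('ZFS.&SYSVER..ROOT')  TYPE(ZFS) MODE(RDWR)
```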

First steps

You need to copy data sets and files from the old system to the new system, for example user.z24c.proclib to user.z25d.proclib. It is easiest to have the volumes from both systems available, so both the z24 and z25 systems can see them.

The user.z24c.proclib will be cataloged on the z24 system, and the user.z25d.proclib will be cataloged on the z25 system, so you’ll need to cross catalog the data sets. These configuration data sets will be on the xxCFG1 volume, so you can use ISPF 3.4, specify the volume, and use the ‘C’ line command to catalog the data sets so they can be seen on both systems.
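If you prefer batch to ISPF 3.4, this is a hedged IDCAMS sketch of the same recatalog. The data set name follows the example above; the volume serial D5CFG1 is an assumption, so use your own xxCFG1 volume.

```
//CATLG    EXEC PGM=IDCAMS
//SYSPRINT DD  SYSOUT=*
//SYSIN    DD  *
  DEFINE NONVSAM(NAME(USER.Z25D.PROCLIB) -
                 DEVICETYPES(3390)       -
                 VOLUMES(D5CFG1))
/*
```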

User data – for example all my RACF define jobs are in a PDS under COLIN. These are in a user catalog which can be imported into the newer system.
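A hedged IDCAMS sketch of the import: the user catalog name A4USR1.ICFCAT is the one from my system, but the volume serial and device type here are assumptions for illustration.

```
//IMPORT   EXEC PGM=IDCAMS
//SYSPRINT DD  SYSOUT=*
//SYSIN    DD  *
  IMPORT CONNECT OBJECTS( -
    (A4USR1.ICFCAT DEVICETYPE(3390) VOLUMES(A4USR1)))
/*
```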

You need to create an alias in the master catalog

DEFINE ALIAS (NAME(COLIN) RELATE('A4USR1.ICFCAT'))

With this, I can now refer to COLIN.* data sets from the newer system.

Detailed instructions

TCPIP

I had to change the TCPIP Resolver (which maps IP names to addresses).

In ADCD.Z31A.TCPPARMS(GBLTDATA) I changed the commented LOOKUP statement, then stopped and restarted the RESOLVER. Without this change, name lookup requests caused 30 second delays.

; LOOKUP DNS LOCAL 
; Colin's change 
  LOOKUP     LOCAL 

When using the RMF Distributed Data Server (DDS), I got the messages

ICH420I PROGRAM GPMDDSRV FROM LIBRARY SYS1.SERBLNKE CAUSED THE
ENVIRONMENT TO BECOME UNCONTROLLED.

I also needed to issue the RACF commands

RALTER PROGRAM * ADDMEM('SYS1.SERBLNKE'//NOPADCHK)
RALTER PROGRAM * ADDMEM('SYS1.SGRBLINK'//NOPADCHK)
SETROPTS WHEN(PROGRAM) REFRESH

Save and copy /etc
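A minimal sketch of the backup and restore, using tar with demo directory names standing in for the real paths. On z/OS Unix you would archive /etc itself, and pax is also commonly used there instead of tar.

```shell
#!/bin/sh
# Sketch of saving and restoring system Unix files such as /etc.
# demo_etc stands in for /etc on the old system; restored stands in
# for the root of the new system.
set -e
SRC=demo_etc
DEST=restored
mkdir -p "$SRC" "$DEST"
echo "resolver config" > "$SRC/resolv.conf"   # stand-in for real files
tar -cf etc_backup.tar "$SRC"                 # back up on the old system
tar -xf etc_backup.tar -C "$DEST"             # copy across, then restore
cat "$DEST/$SRC/resolv.conf"
```

Review the restored files before letting them overwrite anything the new release set up in /etc.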

Which of my ADCD disks should I move to my SSD device?

I’m working on moving to a newer version of ADCD, but I do not have enough space on my SSD drive for all of the ADCD disks, so I am using an external USB device. Which of my new files should I move off the USB drive onto my SSD device for best performance?

Background

How much free space do I have on my disk?

The command

df -P /home/zPDT

gave

Filesystem   1024-blocks     Used Available Capacity Mounted on
/dev/nvme0n1p5 382985776 339351984 24105908      94% /home/zPDT

This shows there is not much free space. What is using all of the space?

ls -lSr

The -S sorts by size, largest first; the -r reverses the sort, so the largest comes last.

This showed me lots of old ADCD files which I could delete. After I deleted them, df -P showed the disk was only 69% full.
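Besides ls -lSr, du can summarise the space used per directory entry. This sketch assumes the /home/zPDT path from above and falls back to the current directory if it does not exist.

```shell
#!/bin/sh
# Summarise space used by each entry in a directory, human-readable,
# sorted so the biggest users appear last.
DIR=/home/zPDT                  # directory from the post; change to suit
[ -d "$DIR" ] || DIR=.          # fall back to the current directory
du -sh "$DIR"/* 2>/dev/null | sort -h | tail
```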

zPDT “disks”

Each device, as seen by zPDT, is served by a Linux process. For example

$ ps -ef |grep 5079
colin 5079 4792 0 10:21 ? 00:00:00 awsckd --dev=0A94 --cunbr=0001

So the process with pid 5079 is running the program awsckd, passing in device number 0A94.
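You can also read a process’s command line straight from /proc: the arguments in /proc/&lt;pid&gt;/cmdline are NUL-separated, so translate the NULs to spaces. Here $$ (the current shell) stands in for the awsckd pid, e.g. 5079, on a zPDT system.

```shell
#!/bin/sh
# Print the command line of a process from /proc.
# /proc/<pid>/cmdline stores arguments separated by NUL bytes,
# so translate them to spaces for display.
pid=$$                          # illustrative; use the awsckd pid here
tr '\0' ' ' < /proc/$pid/cmdline
echo
```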

Linux statistics

You can access Linux statistics under the /proc tree.

less /proc/5079/io

gave

rchar: 251198496
wchar: 79167416
syscr: 4525
syscw: 1403
read_bytes: 78671872
write_bytes: 78655488
cancelled_write_bytes: 0

rchar: characters read

The number of bytes which this task has caused to be read from storage. This is simply the sum of bytes which this process passed to read(2) and similar system calls. It includes things such as terminal I/O and is unaffected by whether or not actual physical disk I/O was required (the read might have been satisfied from pagecache).

wchar: characters written

The number of bytes which this task has caused, or shall cause to be written to disk. Similar caveats apply here as with rchar.

read_bytes: bytes read

Attempt to count the number of bytes which this process really did cause to be fetched from the
storage layer. This is accurate for block-backed filesystems.

write_bytes: bytes written

Attempt to count the number of bytes which this process caused to be sent to the storage layer.

How to find the hot files

Use the Linux command

grep read_bytes -r /proc/*/io |sort -k2,2 -g

This finds the read_bytes for each process. It then sorts numerically (-g) and displays the output. For example

/proc/5088/io:read_bytes: 55910400
/proc/5078/io:read_bytes: 61440000
/proc/5091/io:read_bytes: 72916992
/proc/5079/io:read_bytes: 78671872
/proc/5076/io:read_bytes: 138698752
/proc/5074/io:read_bytes: 321728512

You can then display the process information

ps -ef |grep 5074

Which gave

… awsckd --dev=0A80 --cunbr=0001

From the devmap (or z/OS), device 0A80 is C4RES1.

The disks with the most read activity were (in decreasing order): C4RES1, C4SYS1, C4PAGA, USER02, C4CFG1, C4USS1.