Can I share a VSAM file (ZFS) between systems?

I had this situation using zD&T, a z/OS emulator running on Linux, where the 3390 disks are emulated as Linux files. I have an old image and a new image, and I want to use a ZFS from the new image on the old image to test out a fix.

The high-level answer to the original question is "it depends".

Run in a sysplex

This is how you run in a production environment: you have a sysplex, with a (master) catalog shared by all systems. I cannot create this environment in zD&T, and setting up a sysplex is a lot of work for a simple requirement.

Copy the Linux file

Because the 3390 volumes are emulated as Linux files, you can copy the Linux file and use the copy in the old zD&T image, avoiding the risk of damaging the new image's copy. The Linux file name is different, but the VOLID is the same. I was told you can use an import of the catalog to get this to work. I haven't tried it.
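A sketch of the copy, assuming zD&T's usual device map (devmap) setup; the paths, device address and file names here are all invented:

# On the Linux host, copy the emulated 3390 volume file
cp /home/ibmsys1/newsys/ZFS001 /home/ibmsys1/oldsys/ZFS001

Then point the old image's devmap at the copy with an awsckd device statement, something like:

[manager]
name awsckd 0001
device 0aa2 3390 3990 /home/ibmsys1/oldsys/ZFS001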

The cluster is in a shared user catalog

If the VSAM cluster is defined in a user catalog, and the user catalog can be used on both systems, then the cluster can be used on both systems (but not at the same time). While the cluster is in use, information about the active system is stored in the cluster. When the file system is unmounted, or OMVS is shut down, this system information is removed.

If you do not unmount the file system, or do not shut down OMVS cleanly, then when the file system is mounted on the other system, the mount will detect that the file system was last used on another system, and wait for a minute or so to make sure the other system is inactive. If the mount command is issued during OMVS startup, OMVS will wait for this time. If you have 10 shared file systems, OMVS will wait for each in turn, which can significantly delay OMVS startup.
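To move the file system cleanly between the systems, unmount it on one before mounting it on the other. A sketch of the TSO commands; the data set name and mount point are invented. On the system which currently has it mounted:

UNMOUNT FILESYSTEM('COLIN.TEST.ZFS') NORMAL

and on the other system:

MOUNT FILESYSTEM('COLIN.TEST.ZFS') TYPE(ZFS) MOUNTPOINT('/u/colin/test')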

When the cluster is in the master catalog

Someone suggested

You could mount the volume to your new system and import connect the master catalog of the old system to the new one and define the old alias for the ZFS in the new master pointing to the old master which is now a user catalog to the new system.  If it’s not currently different, you could rename it on the old system to a new HLQ that is different from the existing one and then do the import connect of the master as a usercat and define the new alias pointing to the old ZFS.

This feels too dangerous to me!
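For the record, a sketch of the IDCAMS commands the suggestion describes; the catalog, volume and alias names are invented, and I have not tried this:

//S1       EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
 IMPORT CONNECT -
   OBJECTS((OLDSYS.MASTER.CATALOG -
            DEVICETYPE(3390) -
            VOLUMES(OLDRES)))
 DEFINE ALIAS(NAME(OLDZFS) RELATE(OLDSYS.MASTER.CATALOG))
/*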

Pax the files in the directory

You can use pax to unload the contents of the directory to a data set, then load the data from the data set on the other system.

cd /usr/lpp....
pax -W "seqparms='space=(cyl,(10,10))'" -wzvf "//'COLIN.PAX.PYMQI2'" -x os390 .

On the other system

mkdir mydir
cd mydir
pax -rf "//'COLIN.PAX.PYMQI2'" .

Note: when using cut and paste, make sure you have all of the single quotes and double quotes. I found they sometimes got lost or converted in the pasting.

Using DFDSS

See Migrating an ADCD z/OS release: VSAM files
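In outline, you use ADRDSSU (DFDSS) to dump the VSAM data sets on one system, transfer the dump data set, and restore it on the other system. A sketch, with invented data set names; check the options against the documentation before using them:

//DUMP     EXEC PGM=ADRDSSU
//SYSPRINT DD SYSOUT=*
//OUT      DD DSN=COLIN.DFDSS.BACKUP,DISP=(NEW,CATLG),
//            SPACE=(CYL,(100,100),RLSE),UNIT=SYSDA
//SYSIN    DD *
 DUMP DATASET(INCLUDE(COLIN.TEST.ZFS)) -
      OUTDDNAME(OUT) TOL(ENQF)
/*

and on the other system:

//REST     EXEC PGM=ADRDSSU
//SYSPRINT DD SYSOUT=*
//IN       DD DSN=COLIN.DFDSS.BACKUP,DISP=SHR
//SYSIN    DD *
 RESTORE DATASET(INCLUDE(**)) INDDNAME(IN) CATALOG
/*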

I can’t even spell Ansible on z/OS

The phrase “I can’t even spell….” is a British phrase which means “I know so little about this that I cannot even pronounce or write the word.”

I wanted to see if I could use Ansible to extract some information from z/OS. There is a lot of documentation available, but it felt like the documentation started at chapter 2 of the instruction book, and was missing the first set of instructions.

Below are the instructions to get the most basic ping request working.

On z/OS

Ansible is a Python package which you need to install.

pip install ansible-core

This may install several dependent packages.

It is better to do this in an SSH terminal session rather than from ISPF -> OMVS; for example, pip may display a progress bar, which does not display properly under OMVS.
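For example, from an SSH session; the Python path is the one on my system, yours may differ:

export PATH=/usr/lpp/IBM/cyp/v3r12/pyz/bin:$PATH
pip3 install ansible-core
python3 --version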

On Linux

Setup

sudo apt install ansible

I made a directory to store my Ansible files in

mkdir ansible
cd ansible

There is some good documentation in the Ansible "Getting started" guide.

Edit the inventory.ini

[myhosts]
10.1.1.2

[myhosts:vars]
ansible_python_interpreter=/usr/lpp/IBM/cyp/v3r12/pyz/bin/python

Where

  • [myhosts] defines a group of hosts; the line under it is the IP address of the remote system.
  • [myhosts:vars] ansible_python_interpreter=… is needed for Ansible to work. It is the location of Python on z/OS.

Check the connection

Ansible uses an SSH session to get to the back end.

ssh colin@10.1.1.2

I have set this up for passwordless logon. Check it works before you try to use Ansible.
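If you have not set up passwordless logon, the usual SSH key setup is a sketch like this; depending on the z/OS OpenSSH level you may need an rsa key instead of ed25519:

ssh-keygen -t ed25519
ssh-copy-id colin@10.1.1.2
ssh colin@10.1.1.2

The last command should log on without prompting for a password.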

Try the ping

ansible myhosts -u colin -m ping -i inventory.ini

Where

  • -i inventory.ini specifies the configuration (inventory) file
  • myhosts says which section of the configuration file to use
  • -u colin says log on with this userid
  • -m ping says run the ping module

When this worked I got

10.1.1.2 | SUCCESS => {
"changed": false,
"ping": "pong"
}

The command took about 10 seconds to run.
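Once the ad-hoc ping works, the same test can be written as a minimal playbook. This is my sketch; the file name ping.yml is my choice:

# ping.yml
- name: Check the connection to z/OS
  hosts: myhosts
  tasks:
    - name: Ping the host
      ansible.builtin.ping:

and run it with

ansible-playbook -i inventory.ini -u colin ping.yml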

You may not need to specify the -u information.
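For example, you can put the userid in inventory.ini (my assumption: using the ansible_user variable) and then omit -u:

[myhosts:vars]
ansible_user=colin
ansible_python_interpreter=/usr/lpp/IBM/cyp/v3r12/pyz/bin/python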

What can go wrong?

I experienced the following problems.

Invalid userid

ansible myhosts -u colinaa -m ping -i inventory.ini

10.1.1.2 | UNREACHABLE! => {
"changed": false,
"msg": "Failed to connect to the host via ssh: colinaa@10.1.1.2: Permission denied (publickey,password).",
"unreachable": true
}

This means you got to the system, but you specified an invalid userid, or the userid was not allowed to connect over SSH.

Python configuration missing

ansible myhosts -u colin -m ping -i inventory.ini

This originally gave me

[WARNING]: No python interpreters found for host 10.1.1.2 (tried ['python3.12', 'python3.11',
'python3.10', 'python3.9', 'python3.8', 'python3.7', 'python3.6', '/usr/bin/python3',
'/usr/libexec/platform-python', 'python2.7', '/usr/bin/python', 'python'])
10.1.1.2 | FAILED! => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python"
},
"changed": false,
"module_stderr": "Shared connection to 10.1.1.2 closed.\r\n",
"module_stdout": "/usr/bin/python: FSUM7351 not found\r\n",
"msg": "MODULE FAILURE\nSee stdout/stderr for the exact error",
"rc": 127
}

Edit the inventory.ini and add the ansible_python_interpreter information.

[myhosts]
10.1.1.2

[myhosts:vars]
ansible_python_interpreter=/usr/lpp/IBM/cyp/v3r12/pyz/bin/python