How do you download and use a dataset from z/OS?

Transferring a dataset from z/OS to Windows or Linux and using it can be a challenge.

A record in a data set on z/OS has a 4-byte Record Descriptor Word (RDW) at the front of the record. The first two bytes give the length of the record, including the RDW itself; the other two bytes are typically 0.

FTP has two modes for transferring data: ASCII and binary (BIN).

ASCII

With ASCII mode, FTP reads the record,

  • Removes the RDW
  • Converts it from EBCDIC to ASCII
  • Adds a “New Line” character to the end of data
  • Sends the data
  • Writes the data to a file stream.

On Unix and Windows a text file is a long stream of data. When the file is read, a New Line character ends the logical record, so the following data is displayed on a new line.

Binary mode

Binary mode is used when the dataset has binary content, and not just printable characters. The New Line hex character could be part of the binary data, so this character cannot be used to delineate records.

FTP has an option for RDW

quote site RDW

The default is RDW FALSE.

If RDW is FALSE then FTP removes the RDW from the data before sending it. At the remote end, the data is a stream of data, and you have no way of identifying where one logical record ends, and the next logical record starts.

If RDW is TRUE, then the 4 byte RDW is sent as part of the data. The application reading the file can read the information and calculate where the logical record starts and ends.

For example, on z/OS the dataset has the following records (in hex). Only the data after each RDW is displayed when you edit or browse the dataset; the RDW itself is not.

00080000C1C2C3C4
00060000D1D2
00090000E1E2E3E4E5

If the data was transmitted with RDW FALSE the data in the file would be

C1C2C3C4D1D2E1E2E3E4E5

If the data was transmitted with RDW TRUE the data in the file would be

00080000C1C2C3C400060000D1D200090000E1E2E3E4E5

Conceptually you can process this file stream using C code:

short RDW;            // 2 byte integer
short dummy;          // 2 byte integer
char  mydata[32756];  // buffer for the largest record

fread(&RDW, 2, 1, file);         // get the length
fread(&dummy, 2, 1, file);       // ignore the 0s
fread(mydata, 1, RDW - 4, file); // -4 for the RDW already read

...
fread(&RDW, 2, 1, file);         // get the length
fread(&dummy, 2, 1, file);       // ignore the 0s
fread(mydata, 1, RDW - 4, file); // -4 for the RDW already read

(Thanks to pjfarley3 who pointed out the RDW length includes the 4 byte RDW – so the application data length is RDW -4.)

In practice this will not work because z/OS numbers are Big Endian, and X86 and ARM machines are Little Endian. (With Big Endian the left byte is the most significant; with Little Endian the right byte is the most significant – the bytes are transposed.)

On z/OS the bytes 0x00 0x04 are decimal 4. On X86 and ARM the bytes 0x04 0x00 are 4.
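The transposition is easy to demonstrate in Python:

```python
# The two bytes 0x00 0x04 as they arrive from z/OS.
rdw_length = b"\x00\x04"

# Interpreted as Big Endian (the z/OS convention) the value is 4.
print(int.from_bytes(rdw_length, "big"))     # 4

# Interpreted as Little Endian (the X86/ARM default) the same two
# bytes give 1024 - so you must tell your code the data is Big Endian.
print(int.from_bytes(rdw_length, "little"))  # 1024
```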

In practice you need code on X86 and ARM, like the following, to get the value of a half word from a z/OS data set.

unsigned char RDW[2];            // 2 bytes
fread(RDW, 1, 2, file);          // get the length
length = 256 * RDW[0] + RDW[1];  // most significant byte first

and similarly for longer integers.

Python

If you are using the Python struct facility, you can pass a string of data types and get the processed values.

  • The string “>HH” says two half words, and the > says the numbers are Big Endian.
  • The string “<HH” says two half words and the < says they are Little Endian
  • The string “HH” says two half words – read in the default representation.
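For example, the following sketch uses struct to split an RDW TRUE stream into its logical records (the hex is a sample stream whose record lengths include the 4-byte RDW):

```python
import struct

# An RDW TRUE stream: each record is a 4-byte RDW (big-endian length
# that includes the RDW itself, then two zero bytes) followed by data.
stream = bytes.fromhex("00080000C1C2C3C400060000D1D200090000E1E2E3E4E5")

records = []
pos = 0
while pos < len(stream):
    # ">HH" = two big-endian unsigned half words: the length and the zeros.
    length, _ = struct.unpack(">HH", stream[pos:pos + 4])
    records.append(stream[pos + 4:pos + length])  # length - 4 data bytes
    pos += length

print(records)  # [b'\xc1\xc2\xc3\xc4', b'\xd1\xd2', b'\xe1\xe2\xe3\xe4\xe5']
```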

Conversion

You’ll need to do your own conversion from EBCDIC to ASCII to make things printable!
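Python ships with EBCDIC codecs, so assuming the data is in IBM code page 037 (US EBCDIC), the conversion is one call:

```python
# The EBCDIC bytes C1 C2 C3 C4 are the letters ABCD in code page 037.
ebcdic = bytes.fromhex("C1C2C3C4")

text = ebcdic.decode("cp037")   # EBCDIC -> str
print(text)                     # ABCD

# and back again
assert text.encode("cp037") == ebcdic
```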

How do you trust a file?

I was asked this question by someone wanting to ensure their files have not been hacked. In the press there are articles where bad guys have replaced some code with code that steals credentials, or it allows an outsider access to your machine. One common solution to trusting a file uses cryptography.

There are several solutions that do not work

Checking the date of a file.

This does not work because there are programs that allow you to change the date and time of files.

Checking the number of bytes

You can list a file’s properties. One property is the size of the file. You could keep a list of files and their sizes.

There are two problems

  1. You can change the contents of the file without changing the size of the file. I’ve done this. Programs used to have a patch area where skilled people could write some code to fix problems in the program.
  2. Someone changes the contents of the file – but also changes your list to reflect the new size, and then changes the date on the file and in your list, so they look as if they have not changed.

Hashing the file contents

Do a calculation on the contents of the file. A trivial function, easy to implement but also easy to defeat, is to treat each character as an unsigned integer and add up all of the characters.

A better hashing function does a calculation cs = mod(c**N, M). For example, when the current character is 3, N is 7 and M is 13: find the remainder of 3*3*3*3*3*3*3 (= 2187) when divided by 13; the answer is 3. N and M should be very large. Instead of using one character you take 8 or more. You then apply the algorithm across the file.

cs = 0
do 8 bytes of the file at a time
cs = mod((cs + the 8 bytes)** N,M)
end
display cs

Some numbers N and M are better than others. Knowing the value cs, you cannot go back and recreate the file.
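A minimal Python sketch of this scheme; the values of N and M here are far too small for real use:

```python
def checksum(data: bytes, n: int, m: int) -> int:
    """Fold the file 8 bytes at a time into cs = mod((cs + chunk)**N, M)."""
    cs = 0
    for i in range(0, len(data), 8):
        chunk = int.from_bytes(data[i:i + 8], "big")
        cs = pow(cs + chunk, n, m)   # modular exponentiation
    return cs

# The single-character example from the text: 3**7 mod 13 is 3.
print(checksum(b"\x03", 7, 13))  # 3
```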

If you just store the checksum value in a file, then the bad guys can change this file, and replace the old checksum with the new checksum of the file with their change. It appears that doing a checksum on the file does not help.

Cryptography to the rescue

To make things secure, there are several bits of technology that are required

  • Public and private keys
  • How do I trust what you’ve sent me

Public and private keys

Cryptography has been around for thousands of years. It typically had a key which was used to encrypt data, and the same key could be used to decrypt the data.

The big leap in cryptography was the discovery of asymmetric keys where you need two keys. One can be used for encryption, and you need another for decryption. You keep the one key very secure (and call it the private key) and make the other key publicly available (the public key). Either key can be used to encrypt, and you need the other key to decrypt.

The keys can be used as follows

  • You encrypt some data with my public key. It can only be decrypted by someone with my private key.
  • I can encrypt some data with my private key and send it to you. Anyone with my public key can decrypt it. In addition, because they had to use my public key, they know it came from me (or, to be more accurate, from someone with my private key).
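The asymmetry can be demonstrated with textbook RSA using toy numbers. This is a sketch only (the primes are absurdly small, and real RSA adds padding), but it shows that either key undoes the other:

```python
# Textbook RSA with toy primes p=61, q=53: n = p*q, and the exponents
# 17 and 2753 are inverses mod (p-1)*(q-1).  Never use sizes like this.
n, public, private = 3233, 17, 2753

message = 65

# Encrypt with the public key; only the private key recovers it.
assert pow(pow(message, public, n), private, n) == message

# "Sign" with the private key; anyone with the public key can check it.
assert pow(pow(message, private, n), public, n) == message
```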

How do I trust what you’ve sent me

I would be very suspicious if I received an email saying

This is your freindly bank heree. Please send us your bank account details with this public key. Pubic keys are very safe and we are the only peoples who can decrypt what you send me.

Digital certificates and getting a new passport

A public certificate has

  • Your name
  • Your address, such as Country=GB, Org=Megabank.com
  • Your public key
  • Expiry date
  • What the certificate can be used for

I hope the following analogy explains the concepts of digital certificates.

Below are the steps required to get a new passport

  • You turn up at the Passport Office with your birth certificate, a photograph of you, a gas bill, and your public certificate.
  • The person in the office checks
    • that the photo is of you.
    • your name is the same as the birth certificate
    • the name on the gas bill matches your birth certificate
    • the address of the gas bill is the same as you provided for your place of residence.
  • The office creates the passport, with information such as where you live (as evidenced by the gas bill)
  • The checksum of your passport is calculated.
  • The checksum is encrypted with the Passport Office’s PRIVATE key.
  • The encrypted checksum and the Passport Office’s PUBLIC key are printed, and stapled to the back of the passport
  • The passport is returned to you. It has been digitally signed by the Passport Office.

How do I check your identity?

At the back of MY passport is the printout of the Passport Office’s public key. I compare this with the one attached to your passport – they match!

I take the encrypted checksum from your passport, and decrypt it using the Passport Office’s public key (yours or mine – they are the same). I write this on a piece of paper.

I do the same checksum calculation on your passport. If the value matches what is on the piece of paper, then you can be confident that the passport has not been changed, since it was issued by the Passport Office. Because I trust the Passport Office, I trust they have seen your birth certificate, and checked where you live, and so I trust you are who you say you are.

But..

Your passport was issued by the London Passport Office, and my passport was issued by the Scottish Passport Office, and the two public certificates do not match.

This problem is solved by the use of a Certificate Authority (CA).

Consider a UK wide Certificate Authority office. The Scottish Passport Office sent their certificate (containing name, address and public key) to the UKCA. The UKCA did a checksum of it, encrypted the checksum with the UKCA PRIVATE key, and attached the encrypted checksum and the UKCA public certificate to the certificate sent in – the same process as getting a passport.

Now when the Scottish Passport Office processes my passport, they do the checksum as before, and affix the Scottish Passport Office’s public certificate as before… but this certificate has a copy of the UKCA’s certificate and the encrypted checksum stuck to it. The passport now has two bits of paper stapled to it: the Scottish Passport Office’s public certificate, and the UKCA’s public certificate.

When I validate your passport I see that the London Passport Office’s certificate does not match the Scottish Passport Office’s certificate, but they have both been signed by the UKCA.

  • I compare the UKCA’s public certificates – they match!
  • I decrypt the checksum from the London office using the UKCA’s certificate and write it down
  • I do the same checksum calculation on the London office’s certificate and compare it with what is written down. They match – I am confident that the UKCA has checked the credentials of the London office
  • I can now trust the London certificate, and use it to check your passport as before.

What happens if I do not have the UKCA certificate

Many “root” certificates from corporations are shipped on Windows, Linux, z/OS, Macs etc. The UKCA goes to one of these corporations, gets their certificate signed, and includes the corporation’s certificate attached to the UKCA certificate. Of course it costs money to get your certificate signed by one of these companies.

You could email the UKCA certificate with the public key to everyone you know. This has the risk that bad guys who are intercepting your email replace the official UKCA certificate with their certificate. Plan B would be to ship a memory stick with the certificate on it – but the same bad guys could be monitoring your mail, and replace the memory stick with one of theirs.

How does this help me trust a file?

The process is similar to that of getting a passport.

My “package” has two files abc.txt and xyz.txt

At build time

  • Create the files abc.txt and xyz.txt
  • Calculate the checksum of each file, and encrypt the value – this produces a binary signature file for each, for example abc.txt.signature
  • Create a directory with
    • Your public certificate/public key
    • A directory containing all of the signature files
    • A list of all of the files in the signature directory
    • A signature of the directory listing, directory.list.signature

You ship this directory as part of your product.

When you install the package

  • Validate the certificate in the package against the CA stored in your system.
  • Validate the directory list against its signature (directory.list.signature), and check the list of files is valid
  • For each line in the directory list, go through the same validation process with the file and its signature.

For the paranoid

Every week calculate the checksum of each file in the package and send it to a remote site.

At the remote site compare the filename, checksum combination against last week’s values.

If they do not match, the files have been changed.

Of course if your system has been hacked, the bad guys may be intercepting this traffic and changing it.
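That weekly check can be sketched in Python using SHA-256 (the function names are my own; the text does not prescribe a particular hash):

```python
import hashlib
from pathlib import Path

def package_checksums(directory: str) -> dict:
    """Return {relative file name: SHA-256 hex digest} for every file."""
    root = Path(directory)
    return {
        str(p.relative_to(root)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in sorted(root.rglob("*")) if p.is_file()
    }

def changed_files(this_week: dict, last_week: dict) -> list:
    """Names whose checksum differs, including files added or removed."""
    names = set(this_week) | set(last_week)
    return sorted(n for n in names if this_week.get(n) != last_week.get(n))
```

Each week, send the output of package_checksums to the remote site; the remote site runs changed_files against last week’s values and raises an alarm if the list is not empty.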

How do I do it?

I have a certificate mycert.pem, and my private key mycert.key.pem. It was signed by ca256.

Build

Run the command against the first file

openssl dgst -sign mycert.key.pem abc.txt   > abc.txt.signature

Move the abc.txt.signature to the package’s directory.

Create the trust package

/
.. /mycert.pem
.. /directory.list.txt
.. /directory.list.txt.signature
.. /signatures/
.. .. /abc.txt.signature
.. .. /xyz.txt.signature

Validate the package

Validate the certificate in the package.

openssl verify -CAfile ca256.pem mycert.pem 

extract the public key from the certificate.

openssl x509 -pubkey -noout -in mycert.pem > mycert.pubkey

validate the checksum of the abc file using the public key.

openssl dgst -verify ./mycert.pubkey  -signature abc.txt.signature  abc.txt

Does it work with data sets ?

On z/OS I created a signature file with

openssl dgst -sign tempcert.key.pem  "//'COLIN.JCL(ALIAS)'"  > jcl.alias.signature

and validated it with

openssl dgst -verify tempcert.pubkey -signature jcl.alias.signature  "//'COLIN.JCL(ALIAS)'"   

Formatting SYSADATA from HLASM

I wanted to extract information about DSECTS from the SYSADATA output from compiling an assembler program on z/OS.

On the whole it was pretty easy – but had some surprises!

I’ve put some Python code up on github. It runs on my z/OS.

Where is the record layout documented?

The record layout is documented in the HLASM V1R6 Programmer’s Guide. DSECTs for the various record types are provided in HLA.SASMMAC1(ASMADATA).

I used record type 0x0042 for symbols. To get a record in this section of the ADATA it needs a label.

For example

      DSECT COLIN2
ABCD DS CL8
ABCDE DS CL8

This will not produce a record for the DSECT – because it does not have a label.

ESDID – section names

Each DSECT or CSECT will have an External Symbol Directory ID.

  • CSECTs start at 1 and increment: 1, 2, …
  • DSECTs start at -1 and decrement: 4294967295 = 0xFFFFFFFF (-1), 4294967294 = 0xFFFFFFFE (-2)
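In Python the signed ESDID can be recovered by reinterpreting the four bytes, for example:

```python
import struct

def signed_esdid(esdid: int) -> int:
    """Reinterpret an unsigned 32-bit ESDID as a signed integer."""
    return struct.unpack(">i", struct.pack(">I", esdid))[0]

print(signed_esdid(1))           # 1   (a CSECT)
print(signed_esdid(4294967295))  # -1  (the first DSECT)
print(signed_esdid(4294967294))  # -2
```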

Field order

The order of records seems to be random. The CSECT/DSECT statement is often after some fields in the CSECT/DSECT.

To find the xSECT for each symbol, I saved the SECT name and ESDID, and post processed the list of symbols by adding the xSECT information afterwards from the ESDID.

Field offsets

The offsets in each record seem to be the offset from the first instruction. I had to save the offset from the CSECT statement, then post process the records to calculate (offset of symbol in CSECT) = symbol offset – start_of_CSECT offset.
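Conceptually, the post-processing looks like this in Python (the tuples are a simplified stand-in for the real ADATA records, and the values are invented for illustration):

```python
# Symbols as scanned from the ADATA: (ESDID, name, offset from the
# first instruction).
symbols = [(1, "LOGPLL", 2), (-1, "ABCD", 16), (-1, "ABCDE", 24)]

# Saved while scanning: ESDID -> (section name, offset of the xSECT statement)
sections = {1: ("CSQ6LOGP", 0), -1: ("COLIN2", 16)}

# Post process: attach the section name and make each offset relative
# to its CSECT/DSECT: symbol offset - start_of_section offset.
resolved = [
    (name, offset - sections[esdid][1], sections[esdid][0])
    for esdid, name, offset in symbols
]
print(resolved)
```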

“Problems”

Missing data

To get a record into the ADATA ensure it has a label.

Output from my code

     ESDID    Symbol    Offset  Length  TypeA  SymType        CSECT
4294967295    ABCD           0       8  C      OrdinaryLabel  COLIN2
4294967295    COLIN2         0       1  J      DSECT          COLIN2
4294967295    ABCDE          8       8  C      OrdinaryLabel  COLIN2
         1    CSQ6LOGP       0       1  J      CSECT          CSQ6LOGP
         1    LOGP           0       8  D      OrdinaryLabel  CSQ6LOGP
         1    LOGPID         0       2  R      OrdinaryLabel  CSQ6LOGP
         1    LOGPLL         2       2  R      OrdinaryLabel  CSQ6LOGP
         1    LOGPEID        4       4  C      OrdinaryLabel  CSQ6LOGP
         1    LOGPMRTU       8       2  R      OrdinaryLabel  CSQ6LOGP
         1    LOGOPT1       10       1  R      OrdinaryLabel  CSQ6LOGP
         1    LOGOPT2       11       1  R      OrdinaryLabel  CSQ6LOGP
         1    LOGPMCOF      12       2  R      OrdinaryLabel  CSQ6LOGP
         1    LOGPOBPS      16       4  R      OrdinaryLabel  CSQ6LOGP
         1    LOGPIBPS      20       4  R      OrdinaryLabel  CSQ6LOGP
         1    LOGPARCL      24       4  R      OrdinaryLabel  CSQ6LOGP
         1    LOGPWRTH      30       2  R      OrdinaryLabel  CSQ6LOGP
         1    LOGPLVL       32       7  C      OrdinaryLabel  CSQ6LOGP
         1    LOGPLVLN      39       1  R      OrdinaryLabel  CSQ6LOGP
         1    LOGPDMIN      40       2  R      OrdinaryLabel  CSQ6LOGP
         1    LOGPDSEC      42       2  R      OrdinaryLabel  CSQ6LOGP
         1    LOGPCOMP      44       4  R      OrdinaryLabel  CSQ6LOGP
         1    LOGPEND      256       1  U      EQU            CSQ6LOGP

Re-entrant assembler macros in z/OS

In a C program on z/OS you can code assembler macros. See Putting assembler code inside a C program

I wanted to use a WTO/WTOR macro to put a message onto the operator console. This took a few hours to get working, firstly I needed to understand using Re-entrant macros, and secondly see Using Re-entrant assembler macros in C ASM().

Background to using re-entrant macros in assembler

Programming in assembler

You can write a macro like

   WTO  'Colin'

This generates code

         WTO   'Colin'               

BRAS 1,LABEL1 Set register 1 to the following DC and jump to the label
LENDATA DC AL2(9)
DC B'0000000000000000'
DC C'Colin'
LABEL1 DS 0H
SVC 35

This works.

I can pass a string to the macro – a half word length followed by the data – like LENDATA above. This makes the WTO macro more complex.

You can pass a variable content message to WTO. The code following

  • creates data in the correct format (length followed by the data).
  • The WTO creates the data structure
  • The WTO updates the data structure from the data passed to the macro
* define the variable data, and point register 2 to it
BRAS 2,OVERTEXT
DATA DC AL2(5) Length of string
DC C'Colin' the data to display

* invoke WTO. (2) says the address of the data is in register 2
OVERTEXT DS 0H
WTO TEXT=(2)

* This generates...
CNOP 0,4
BRAS 1,WTOLABA branch around definition
* The WTO data structure
DC AL2(8) TEXT LENGTH
DC B'0000000000010000' MCSFLAGS
TEXTADDR DC AL4(0) MESSAGE TEXT ADDRESS
..
* The instructions to update the data structure from the passed data
WTOLABA DS 0H
LR 14,1 FIRST BYTE OF PARM LIST
SR 15,15 CLEAR REGISTER 15
AH 15,0(1,0) ADD LENGTH OF TEXT + 4
AR 14,15 FIRST BYTE AFTER TEXT
ST 2,4(0,1) STORE TEXT ADDR INTO PLIST
SVC 35 ISSUE SVC 35

Where

  • BRAS 2,OVERTEXT saves in register 2 the address of data, then branches to the label OVERTEXT
  • BRAS 1,WTOLABA saves in register 1 the address of the structure following the instruction, then branches to label WTOLABA
  • ST 2,4(0,1) stores register 2 – the address of my data – into the field TEXTADDR at offset 4 in the control block

This fails to execute because the whole program is Re-entrant (RENT) and so the whole program is read only. The program is trying to store register 2 into read only storage.

Solving the RENT problem.

The problem of trying to write into read only storage is solved by splitting the above code into two parts and using thread read/write storage.

  • define the structure for the constants in read only storage
  • copy the read only structure to thread read write storage
  • use the thread read write storage for the request.

Define the structure for the constants in read only storage

You use an extra parameter on the macro call

WTOS      WTO TEXT=(),MF=L  
ETWOS DS 0H

This generates code

WTOS     WTO   TEXT=(),MF=L                                       
WTOS DS 0F
DC AL2(8) TEXT LENGTH
DC B'0000000000010000' MCSFLAGS
DC AL4(0) MESSAGE TEXT ADDRESS
...
DC AL4(0) WSPARM ADDRESS
ETWOS DS 0H

This has created the data structure. The data is of length ETWOS-WTOS

Copy the read only structure to thread read write storage

In assembler you can use MVC or MVCL to copy from the read only data to thread read/write storage.

Use the thread read write storage for the request.

If the static part of the structure was copied to the block of storage userdata, the code below will issue the WTO request

WTOL   WTO TEXT=((2)),MF=(E,userdata)

This generates the code to update the structure in the thread read/write storage

         WTO   TEXT=((2)),MF=(E,userdata)                                
LA 1,userdata LOAD PARAMETER REG
LR 14,1 FIRST BYTE OF PARM LIST
SR 15,15 CLEAR REG 15
AH 15,0(1,0) ADD LENGTH OF TEXT + 4
AR 14,15 ADDR OF BYTE AFTER TEXT
ST 2,4(1,0) MOVE TEXT INTO PARM LIST
OI 4(14),B'00000000' SET EXTENDED MCS FLAGS
OI 5(14),B'10000000' SET EXTENDED MCS FLAGS2
SVC 35 ISSUE SVC 35

Note:

Some z/OS components use MF=(S,name) to generate the Static structure with the specified name, others use MF=(L).

They all use MF=(E,name).

Moving to the z/OS standard image and onward

For vendors and people like me who used ZD&T or zPDT to run z/OS on an IBM provided emulator on Linux, moving to the new standard image is a challenge.

Below are my thoughts on how to make it easier to use the standard image.

What does migration mean?

The term migration may mean different things to different people.

  • “Production customers” have a z/OS image, and they refresh the products, while keeping userids, user datasets etc. the same. The products (from IBM and vendors) gradually change over time, typically every 3-6 months. This process is well known, and has been used over many decades.
  • With the IBM standard image, IBM makes a new level of z/OS available, and you have to migrate userids, datasets etc into the image. Every 3-6 months there may be a new image available. Moving from one level of standard image to another level of standard image is new and not documented. It looks easy to do it wrong, and make migration hard. It may take time to migrate to the first standard image, but moving to later images should take no more than half an hour.

This blog post suggests ways of making it easy to set up and use the standard image.

Moving to the first standard image may mean a lot of work, but if you do it the right way moving on should be easy.

Setting the direction

My recommendations are (I would welcome discussion on these topics).

A couple of years ago I wrote a series of blog post starting with Migrating an ADCD z/OS release to the next release. A lot of the information is still relevant. Below I’ve tried to refine it for the migration to the standard image.


Restrict what you put into the master catalog.

You can restrict what users put into the master catalog. For example, enforce that every data set high level qualifier has a RACF profile, and allow general users to add only user catalog entries to the catalog.


Ensure you use a user catalog

If your datasets are in a user catalog, then to go to the next standard image you just import the user catalog. If you’ve cataloged datasets in the master catalog, then these are not immediately transferable to a new system.

Use USER. datasets, not SYS1. datasets

You can configure z/OS so it uses parmlib and proclib datasets you specify. On the ZD&T there are USER.Z31B PROCLIB, PARMLIB, CLIST datasets etc. You can copy/use these on each new standard image.

If you have changed ADCD.* or SYS1.* datasets, you can use ISPF 3.4, then sort on the “changed” column to see members changed since you first used the system. Then move them to the USER.* datasets.

Create resources using JCL rather than issuing commands, or using the ISPF panels

Use JCL to issue commands in batch TSO, rather than issuing the commands manually. For example, with the standard image you may get one userid (IBMUSER), and you want to create more userids. Have a JCL member with the commands to create the additional userids.

Once created, you just submit the JCL for the follow-on standard image.

Have an ordering to the members in your migration dataset.

If you have to define a group before you create a userid which uses this group, then have members R1GROUP, R2USER1; or have multiple PDSEs, eg COLIN.DO1GROUP.JCL, COLIN.DO2USERS.JCL, where the members within a data set can be run in any order.

OMVS file systems

I have multiple ZFS (file systems) which I mount on the z/OS image. If these are cataloged in the user catalog, they can be mounted on the new system and used.

You need to think about where to mount them. If the new image has been configured to use automount, this can cause problems. Automount is an OMVS facility which can create a ZFS and mount it for a user. You can allocate a ZFS on a per userid basis, so if one userid uses lots of disk space, it does not affect other users; they just run out of space.

When automount is active on the /u directory, if I try to mount my file system on /u, for example /u/colinzfs, the mount will fail because /u/colinzfs is already allocated.

You need to use another directory perhaps /my to mount your ZFS on.

If a user’s home directory is something like /my/colin, SSH keys will be available on the new system, without having to set them up again.

Changing files in system file systems

Try to avoid changing the system file systems, for example /etc, /var, /usr.

If you have changed the system file systems, see here to see which files have changed since you started using the current image, and move them to your own file system.

Userids and OMVS

You can use the RACF AUTOUID facility which allocates a UID for the userid. This means you do not need to manage the list of UIDs. This makes life easier for an administrator, but harder for a standard image user.

If you use AUTOUID on the current system you may get a UID such as 990021. On the newer image, your userid may be given a different UID – depending on the order and number of requests made. Having a different UID can cause problems when using your ZFS. For example the files for my userid COLIN have owner with UID 990021. On the newer system, I may get UID 990033. As this UID is different to 990021, I will not have access to my files.

You should consider explicitly allocating a UID which stays with the user.

If you want to extract RACF profiles from the current system, see the extract program. This will create the RACF commands needed to define the profiles. You can specify userids, datasets or classes.

Certificates

You can use RACF commands to display and extract keyring information, and certificates (public and private parts). These can be imported on the newer system. This means your client applications will continue to work.

ICSF

You can configure which data sets ICSF uses in the CSFPRMxx member in parmlib. Mine are prefixed with COLIN…

Started tasks

Many started tasks associated with OMVS (or TCPIP) store configuration in /etc. For example the file /etc/hosts and the directory /etc/ssh.

You may be able to change the started tasks to use files in your ZFS.

For example

//SSHD    PROC 
//SSHD EXEC PGM=BPXBATCH,REGION=0M,TIME=NOLIMIT,
// PARM='PGM /usr/sbin/sshd -f /my/etc/ssh/sshd_config '

What packages are installed?

You can issue

zopen query -i > installed 

to see what is installed

This gave me

Package   Version  File                                 Releaseline
bash      5.3.9    bash-5.3.20260204_143226.zos         STABLE
curl      8.18.0   curl-8.18.0.20260205_151329.zos      STABLE
git       2.53.0   git-v2.53.0.20260212_134939.zos      STABLE
gpg       2.5.17   gnupg-2.5.17.20260130_021013.zos     STABLE
jq        1.8.1    jq-jq-1.8.1.20250919_125054.zos      STABLE
less      692      less-v692-rel.20260209_153821.zos    STABLE
libpsl    1.0.0    libpsl-master.20260102_060204.zos    STABLE
libssh2   1.11.1   libssh2-1.11.1.20260102_060940.zos   STABLE
meta      0.8.4    meta-main.20260116_055504.zos        STABLE
ncurses   6.6      ncurses-6.6.20260129_223023.zos      STABLE
openssl   3.6.0    openssl-3.6.0.20260101_102819.       STABLE

and

pip list

which gave

Package      Version
------------ -----------
ansible-core 2.20.3
cffi         1.14.6
cryptography 3.3.2
Jinja2       3.1.6
MarkupSafe   3.0.3
packaging    26.0
pip          26.0.1
pycparser    2.20
pysear       0.4.0
PyYAML       6.0.3
pyzfile      1.0.0.post2
resolvelib   1.2.1
six          1.16.0
tzdata       2025.3
zoautil-py   1.2.5.10

How do I allocate a Unix id on z/OS?

To use Unix services (sometimes known as USS) on z/OS, a userid needs a user ID (UID). This, as on Unix, is an integer. A user can be pre-allocated a permanent UID, or be allocated a UID when needed. See Automatically assigning unique IDs through UNIX services.

Unique or not Unique?

It is good practice for each userid to have a unique UID. If users share the same UID,

  • The users share ownership and access to the same files.
  • If you ask for the userid associated with an id – you may get the wrong answer!

However some super users need a UID of 0.

You can set this as shared with

altuser colin OMVS(UID(0) SHARED)

Instead of allocating UID(0), you can use the BPX.SUPERUSER resource in the FACILITY class to get the authority to do most of the tasks that require superuser authority.

  1. You can explicitly specify an id which you allocate (this means you need a list of ids and owners, so you know which ids are free).
  2. You can have z/OS do this for you. See Enabling automatic assignment of unique UNIX identities.

You can use ADDUSER COLIN OMVS(AUTOUID) which allocates an available UID.

Should I use AUTOUID?

I run z/OS on a zD&T image. Every 6 months or so there is a new level of z/OS which I can download. I then need to migrate userids, datasets etc to this new system. This is different to a normal customer z/OS where you have an existing system and you migrate a new version of z/OS into it.

I have ZFS file systems for all of my user data.
On the current system my userid COLIN was automatically allocated as 0000990021. Files that I own have this id.

When I get my next system, if I allocate userid COLIN with AUTOUID, it may get a different UID, say 990011. Because this UID 990011 is different to the owner of the files, 990021, I may not be able to access “my” files.

I could change all of my files to have a new owner (and group), or I could ensure my userid on both systems is the same 990021. Using the same UID was much easier.
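If you do choose to re-own the files, the change can be scripted. A hedged sketch in Python (the directory and UIDs are the ones from the text; you need sufficient authority to change file ownership):

```python
import os

def reown(root: str, old_uid: int, new_uid: int) -> int:
    """Change the owner of everything under root owned by old_uid.
    The group is left unchanged. Returns the number of entries changed."""
    changed = 0
    for dirpath, dirnames, filenames in os.walk(root):
        for name in dirnames + filenames:
            path = os.path.join(dirpath, name)
            st = os.lstat(path)
            if st.st_uid == old_uid:
                os.lchown(path, new_uid, st.st_gid)
                changed += 1
    return changed

# For example: reown("/my/colin", 990021, 990033)
```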

How is the range of AUTOUIDs defined?

This is done with the RACF FACILITY profile BPX.NEXT.USER. On my system it has

APPLICATION DATA 990041-1000000/990020-1000000

Can I define a model profile?

You can configure OMVS to automatically give a userid a UID (if it does not have one) and define the rest of the OMVS profile using a model OMVS segment. See Steps for automatically assigning unique IDs through UNIX services.

Users need a home directory

Users need a home directory. There are several ways of doing this.

  • Give users an entry HOME(‘/u/mostusers’). Everyone shares the same directory – not a good idea, because they would all share the SSH keys etc.
  • You could specify HOME(‘/u/mostusers/&racuid’) and specify the userid as part of the definition. This could be done in the model profile mentioned above. If you use this method you need to create the directory, for example as part of creating the userid.
  • Use automount. See Unix services automount is a blessing and curse, where you define a template and the hard work is done for you. For example, for each userid create a ZFS and use that.

I only use a few userids, so manually allocating the userid and the home directory was easy to do.

Note: If you use automount of a directory, such as /u/, you cannot mount other file systems in /u/; you would have to use a different directory, for example /usr/.

How do I create a load module in a PDS from Unix?

This is another of the little problems which are easy once you know the answer.

I used the following shell script to compile my program.

name=extract 

export _C89_CCMODE=1

p1="-Wc,arch(8),target(zOSV2R3),list,source,ilp32,gonum,asm,float(ieee)"
p7="-Wc,ASM,ASMLIB(//'SYS1.MACLIB') "
p8="-Wc,LIST(c.lst),SOURCE,NOWARN64,XREF,SHOWINC -Wa,LIST(133),RENT"

# compile it
xlc $p1 $p7 $p8 -c $name.c -o $name.o

l1="-Wl,LIST,MAP,XREF,AC=1 "
# create an executable in the file system
/bin/xlc $name.o -o $name -V $l1 1>a
extattr +a $name

# create a load module in a PDS
/bin/xlc $name.o -o "//'COLIN.LOAD(EXTRACT)'" -V $l1 1>a

Create an executable in the file system

The first xlc bind step creates an executable with name “extract” in the file system.

Specify the load module

The second bind step specified a load module in a PDS. The load module is stored in COLIN.LOAD. If you copy and paste the line, make sure you have the correct quotes ( double quote, //, single quote, dataset(member),single quote,double quote). Sometimes my pasting lost a quote.

Process assembler code

My program has some assembler code…

 asm( ASM_PREFIX 
" STORAGE RELEASE,...
:"r0", "r1" , "r15" );

It needs the options "-Wc,ASM,ASMLIB(//'SYS1.MACLIB')" to compile it; they specify the location of the assembler macros.

Binder parameters

The line parameters in -Wl,LIST,MAP,XREF,AC=1 are passed to the binder.

Message – wrong suffix on the source file

Without the export _C89_CCMODE=1 I got the message

FSUM3008 Specify a file with the correct suffix (.c, .i, .s, .o, .x, .p, .I, or .a), or a corresponding data set name, instead of -o ./extract.

How do I enter a password on the z/OS console for my program?

I wanted to run a job/started task which prompts the operator for a password. Of course, being a password, you do not want it written to the job log for everyone to see.

In assembler you can write a message on the console, and have z/OS post an ECB when the message is replied to.

         WTOR  'ROUTECD9 ',REPLY,40,ECB,ROUTCDE=(9)
         WAIT  1,ECB=ECB
...
ECB      DC    F'0'
REPLY    DS    CL40

The documentation for ROUTCDE says

  • 9 System Security. The message gives information about security checking, such as a request for a password.

When this ran, the output on the console was as follows. The … is where I typed R 6,abcdefg

@06 ROUTECD9 
...
R 6 SUPPRESSED
IEE600I REPLY TO 06 IS;SUPPRESSED

With ROUTCDE=(1) the output was

@07 ROUTECD1                      
R 7,ABCDEFG
IEE600I REPLY TO 07 IS;ABCDEFG

With no ROUTCDE keyword specified the output was

@08 NOROUTECD                          
R 8 SUPPRESSED
IEE600I REPLY TO 08 IS;SUPPRESSED

The lesson is that you have to specify ROUTCDE=(1) if you want the reply to be displayed. If you omit the ROUTCDE keyword, or specify a value of 9, the output is suppressed.

Can I do this from a C program?

The C run time __console2() function allows you to issue console messages. If you pass an address for modstr, the __console2() function waits until an operator stop or modify command is issued for the job. If a NULL address is passed in modstr, then the message is displayed, and control returns immediately. The text of the modify command is visible on the console.

To get suppressed text you would need to issue the WTOR macro using __asm(…) in your C program.

Can I share a VSAM file (ZFS) between systems?

I had the situation where I was using ZD&T – a z/OS emulator running on Linux, where the 3390 disks are emulated in Linux files. I have an old image and a new image, and I want to use a ZFS from the new image on the old image to test out a fix.

The high level answer to the original question is “it depends”.

Run in a sysplex

This is how you run in a production environment. You have a SYSPLEX, and have a (master) catalog shared by all systems. I cannot create the environment in zD&T. Setting up a sysplex is a lot of work for a simple requirement.

Copy the Linux file

Because the 3390 volumes are emulated as Linux files, you can copy the Linux file and use that file in the old image, and avoid the risk of damaging the new copy. The Linux file name is different, but the VOLID is the same. I was told you can use import catalog to get this to work. I haven’t tried it.

The cluster is in a shared user catalog

If the VSAM cluster is defined in a user catalog, and the user catalog can be used on both systems, then the cluster can be used on both systems (but not at the same time). When the cluster is used, information about the active system is stored in the cluster. When the file system is unmounted, or OMVS is shut down, this system information is removed.

If you do not unmount, or shut down OMVS cleanly, then when the file system is mounted on the other system, the mount will detect that the file system was last used on another system, and wait for a minute or so to make sure the other system is inactive. If the mount command is issued during OMVS startup, OMVS will wait for this time. If you have 10 file systems shared, OMVS will wait for each in turn – which can significantly delay OMVS startup.
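To remove the ownership information cleanly before moving the file system, unmount it first. For example (the mount point and the dataset name OMVS.USER.ZFS are placeholders):

```
# From the z/OS Unix shell (needs a suitably authorized userid):
/usr/sbin/unmount /u/mydir

# or from TSO:
#   UNMOUNT FILESYSTEM('OMVS.USER.ZFS') NORMAL
```

NORMAL waits for current activity to finish; FORCE is available but risks losing data.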

When the cluster is in the master catalog

Someone suggested

You could mount the volume to your new system and import connect the master catalog of the old system to the new one and define the old alias for the ZFS in the new master pointing to the old master which is now a user catalog to the new system.  If it’s not currently different, you could rename it on the old system to a new HLQ that is different from the existing one and then do the import connect of the master as a usercat and define the new alias pointing to the old ZFS.

This feels too dangerous to me!

Pax the files in the directory

You can use pax to unload the contents of the directory to a dataset, then load the data from the dataset on the other system.

cd /usr/lpp....
pax -W "seqparms='space=(cyl,(10,10))'" -wzvf "//'COLIN.PAX.PYMQI2'" -x os390 .

On the other system

mkdir mydir
cd mydir
pax -rf "//'COLIN.PAX.PYMQI2A'" .

Note: when using cut and paste, make sure you have all of the single quotes and double quotes. I found they sometimes got lost in the pasting.
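Before restoring, you can list what is in the archive to check it transferred intact. With neither -r nor -w, pax just prints a table of contents:

```shell
# List (verbosely) the contents of the archive dataset without extracting it
pax -vf "//'COLIN.PAX.PYMQI2'"
```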

Using DFDSS

See Migrating an ADCD z/OS release: VSAM files

I can’t even spell Ansible on z/OS

The phrase “I can’t even spell….” is a British phrase which means “I know so little about this that I cannot even pronounce or write the word.”

I wanted to see if I could use Ansible to extract some information from z/OS. There is a lot of documentation available, but it felt like the documentation started at chapter 2 of the instruction book, with the first set of instructions missing.

Below are the instructions to get the most basic ping request working.

On z/OS

Ansible is a Python package which you need to install.

pip install ansible-core

This may install several packages

It is better to do this in an SSH terminal session rather than from ISPF -> OMVS, because, for example, the install may display a progress bar.

On Linux

Setup

sudo apt install ansible

I made a directory to store my Ansible files in

mkdir ansible
cd ansible

There is some good documentation here.

Edit the inventory.ini

[myhosts]
10.1.1.2

[myhosts:vars]
ansible_python_interpreter=/usr/lpp/IBM/cyp/v3r12/pyz/bin/python

Where

  • [myhosts]… is the IP address of the remote system.
  • [myhosts:vars] ansible_python_interpreter=… is needed for Ansible to work. It is the location of Python on z/OS.
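The inventory is plain INI, so a quick way to sanity-check its structure before running Ansible is to parse it with Python’s configparser. This is just an illustration of the file format, not how Ansible itself reads it:

```python
import configparser

# The inventory shown above, as a string (10.1.1.2 and the interpreter
# path are the values from this article's example).
INVENTORY = """\
[myhosts]
10.1.1.2

[myhosts:vars]
ansible_python_interpreter=/usr/lpp/IBM/cyp/v3r12/pyz/bin/python
"""

# Host lines have no "=value" part, so allow_no_value is needed.
cp = configparser.ConfigParser(allow_no_value=True, delimiters=("=",))
cp.read_string(INVENTORY)

hosts = list(cp["myhosts"])            # the hosts in the [myhosts] group
interpreter = cp["myhosts:vars"]["ansible_python_interpreter"]
print(hosts, interpreter)
```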

Check the connection

Ansible uses an SSH session to get to the back end. Check this works before you use Ansible.

ssh colin@10.1.1.2

I have set this up for passwordless logon.

Try the ping

ansible myhosts -u colin -m ping -i inventory.ini

Where

  • -i inventory.ini specifies the configuration file
  • myhosts specifies which section of the configuration file to use
  • -u colin logon with this userid
  • -m ping run this (ping) module

When this worked I got

10.1.1.2 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}

The command took about 10 seconds to run.

You may not need to specify the -u information.
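Once ping works, the same ad-hoc pattern can run any command on the remote system; for example, with the standard command module (userid and inventory as above):

```shell
# Run a command on every host in the [myhosts] group and print the output
ansible myhosts -u colin -i inventory.ini -m command -a "uname -a"
```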

What can go wrong?

I experienced

Invalid userid

ansible myhosts -u colinaa -m ping -i inventory.ini

10.1.1.2 | UNREACHABLE! => {
    "changed": false,
    "msg": "Failed to connect to the host via ssh: colinaa@10.1.1.2: Permission denied (publickey,password).",
    "unreachable": true
}

This means you got to the system, but you specified an invalid user, or the userid was unable to connect over SSH.

Python configuration missing

ansible myhosts -u colin -m ping -i inventory.ini

This originally gave me

[WARNING]: No python interpreters found for host 10.1.1.2 (tried ['python3.12', 'python3.11',
'python3.10', 'python3.9', 'python3.8', 'python3.7', 'python3.6', '/usr/bin/python3',
'/usr/libexec/platform-python', 'python2.7', '/usr/bin/python', 'python'])
10.1.1.2 | FAILED! => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    },
    "changed": false,
    "module_stderr": "Shared connection to 10.1.1.2 closed.\r\n",
    "module_stdout": "/usr/bin/python: FSUM7351 not found\r\n",
    "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error",
    "rc": 127
}

Edit the inventory.ini and add the ansible_python_interpreter information.

[myhosts]
10.1.1.2

[myhosts:vars]
ansible_python_interpreter=/usr/lpp/IBM/cyp/v3r12/pyz/bin/python