How do I enter a password on the z/OS console for my program?

I wanted to run a job/started task which prompts the operator for a password. Of course, being a password, you do not want it written to the job log for everyone to see.

In assembler you can write a message on the console, and have z/OS post an ECB when the message is replied to.

         WTOR  'ROUTECD9 ',REPLY,40,ECB,ROUTCDE=(9)
         WAIT  1,ECB=ECB
...
ECB      DC    F'0'
REPLY    DS    CL40

The documentation for ROUTCDE says

  • 9 System Security. The message gives information about security checking, such as a request for a password.

When this ran, the output on the console was as follows (the … is where I typed R 6,abcdefg):

@06 ROUTECD9 
...
R 6 SUPPRESSED
IEE600I REPLY TO 06 IS;SUPPRESSED

With ROUTCDE=(1) the output was

@07 ROUTECD1                      
R 7,ABCDEFG
IEE600I REPLY TO 07 IS;ABCDEFG

With no ROUTCDE keyword specified the output was

@08 NOROUTECD                          
R 8 SUPPRESSED
IEE600I REPLY TO 08 IS;SUPPRESSED

The lesson is that you have to specify ROUTCDE=(1) if you want the reply to be displayed. If you omit the ROUTCDE keyword, or specify a value of 9, the output is suppressed.

Can I do this from a C program?

The C run time _console2() function allows you to issue console messages. If you pass an address for modstr, the _console2() function waits until an operator stop or modify command is issued for the job. If a NULL address is passed for modstr, then the message is displayed and control returns immediately. The text of the modify command is visible on the console.

To get suppressed text you would need to issue the WTOR macro using __asm(…) in your C program.

Can I share a VSAM file (ZFS) between systems?

I had the situation where I am using ZD&T – a z/OS emulator running on Linux, where the 3390 disks are emulated as Linux files. I have an old image and a new image, and I want to use a zFS from the new image on the old image to test out a fix.

The high level answer to the original question is “it depends”.

Run in a sysplex

This is how you run in a production environment. You have a sysplex, with a (master) catalog shared by all systems. I could not create this environment in ZD&T: setting up a sysplex is a lot of work for a simple requirement.

Copy the Linux file

Because the 3390 volumes are emulated as Linux files, you can copy the Linux file and use that copy in the old ZD&T image, avoiding the risk of damaging the new copy. The Linux file name is different, but the VOLID is the same. I was told you can use import catalog to get this to work. I haven’t tried it.

The cluster is in a shared user catalog.

If the VSAM cluster is defined in a user catalog, and the user catalog can be used on both systems, then the cluster can be used on both systems (but not at the same time). When the cluster is used, information about the active system is stored in the cluster. When the file system is unmounted, or OMVS is shut down, this system information is removed.

If you do not unmount, or shut down OMVS cleanly, then when the file system is mounted on the other system, the mount will detect that the file system was last used on another system, and wait for a minute or so to make sure the other system is inactive. If the mount command is issued during OMVS startup, OMVS will wait for this time. If you have 10 file systems shared, OMVS will wait for each in turn – which can significantly delay OMVS startup.

When the cluster is in the master catalog

Someone suggested

You could mount the volume to your new system and import connect the master catalog of the old system to the new one and define the old alias for the ZFS in the new master pointing to the old master which is now a user catalog to the new system.  If it’s not currently different, you could rename it on the old system to a new HLQ that is different from the existing one and then do the import connect of the master as a usercat and define the new alias pointing to the old ZFS.

This feels too dangerous to me!

Pax the files in the directory

You can use Pax to unload the contents of the directory to a dataset, then load the data from the dataset on the other system.

cd /usr/lpp....
pax -W "seqparms='space=(cyl,(10,10))'" -wzvf "//'COLIN.PAX.PYMQI2'" -x os390 .

On the other system

mkdir mydir
cd mydir
pax -rf "//'COLIN.PAX.PYMQI2A'" .

Note: when using cut and paste, make sure you have all of the single quotes and double quotes. I found they sometimes got lost in the pasting.

Using DFDSS

See Migrating an ADCD z/OS release: VSAM files

I can’t even spell Ansible on z/OS

The phrase “I can’t even spell….” is a British phrase which means “I know so little about this that I cannot even pronounce or write the word.”

I wanted to see if I could use Ansible to extract some information from z/OS. There is a lot of documentation available, but it felt like the documentation started at chapter 2 of the instruction book, with the first set of instructions missing.

Below are the instructions to get the most basic ping request working.

On z/OS

Ansible is a python package which you need to install.

pip install ansible-core

This may install several packages.

It is better to do this in an SSH terminal session rather than from ISPF -> OMVS; for example, pip may display a progress bar, which does not display well in OMVS.

On Linux

Setup

sudo apt install ansible

I made a directory to store my Ansible files in

mkdir ansible
cd ansible

There is some good documentation here.

Edit the inventory.ini

[myhosts]
10.1.1.2

[myhosts:vars]
ansible_python_interpreter=/usr/lpp/IBM/cyp/v3r12/pyz/bin/python

Where

  • [myhosts]… is the IP address of the remote system.
  • [myhosts:vars] ansible_python_interpreter=… is needed for Ansible to work. It is the location of Python on z/OS.

Check the connection

Ansible uses an SSH session to get to the back end.

ssh colin@10.1.1.2

I have set this up for password-less logon. Check that it works before you try to use Ansible.

Try the ping

ansible myhosts -u colin -m ping -i inventory.ini

Where

  • -i inventory.ini specifies the configuration file
  • myhosts specifies which section in the configuration file to use
  • -u colin logon with this userid
  • -m ping and run this module

When this worked I got

10.1.1.2 | SUCCESS => {
"changed": false,
"ping": "pong"
}

The command took about 10 seconds to run.

You may not need to specify the -u information.

What can go wrong?

I experienced

Invalid userid

ansible myhosts -u colinaa -m ping -i inventory.ini

10.1.1.2 | UNREACHABLE! => {
"changed": false,
"msg": "Failed to connect to the host via ssh: colinaa@10.1.1.2: Permission denied (publickey,password).",
"unreachable": true
}

This means you got to the system, but you specified an invalid user, or the userid was unable to connect over SSH.

Python configuration missing

ansible myhosts -u colin -m ping -i inventory.ini

This originally gave me

[WARNING]: No python interpreters found for host 10.1.1.2 (tried ['python3.12', 'python3.11',
'python3.10', 'python3.9', 'python3.8', 'python3.7', 'python3.6', '/usr/bin/python3',
'/usr/libexec/platform-python', 'python2.7', '/usr/bin/python', 'python'])
10.1.1.2 | FAILED! => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python"
},
"changed": false,
"module_stderr": "Shared connection to 10.1.1.2 closed.\r\n",
"module_stdout": "/usr/bin/python: FSUM7351 not found\r\n",
"msg": "MODULE FAILURE\nSee stdout/stderr for the exact error",
"rc": 127
}

Edit the inventory.ini and add the ansible_python_interpreter information.

[myhosts]
10.1.1.2

[myhosts:vars]
ansible_python_interpreter=/usr/lpp/IBM/cyp/v3r12/pyz/bin/python

My certificate has expired – how do I renew it ?

Once you know how, this is an easy question.

//IBMRACF  JOB 1,MSGCLASS=H 
//S1 EXEC PGM=IKJEFT01,REGION=0M
//SYSPRINT DD SYSOUT=*
//SYSTSPRT DD SYSOUT=*
//SYSTSIN DD *
RACDCERT ID(START1) GENREQ(LABEL('NEWTECCTEST')) -
DSN('COLIN.CERT.REQ')

RACDCERT ID(START1) GENCERT('COLIN.CERT.REQ') -
NOTAFTER( DATE(2027-12-21)) -
SIGNWITH (CERTAUTH LABEL('DOCZOSCA'))
RACDCERT LIST (LABEL('NEWTECCTEST')) ID(START1)
//

The first command takes my existing (expired) certificate belonging to userid START1 and creates a certificate request in the data set. The request looks like

-----BEGIN NEW CERTIFICATE REQUEST-----                               
MIIBgjCCAQcCAQAwNzEUMBIGA1UEChMLTkVXVEVDQ1RFU1QxDDAKBgNVBAsTA1NT
...
qZgQtwIwbYYgRWDQcPOZ92sVszf5Bv+mslcDjNAuM5Sj4Z9uadnKsaTmiy6h16tr
TpPAW84d
-----END NEW CERTIFICATE REQUEST-----

The GENCERT command renews it with the specified date. If you omit the date, it defaults to a year from the start date.

With most of my gencert requests, I have specified information like

RACDCERT ID(COLIN) GENCERT -
SUBJECTSDN(CN('10.1.1.2') -
O('NISTEC256') -
OU('SSS')) -
ALTNAME(IP(10.1.1.2)) -
NISTECC -
KEYUSAGE( HANDSHAKE ) -
SIZE(256) -
SIGNWITH (CERTAUTH LABEL('DOCZOSCA')) -
WITHLABEL('NISTEC256')
Because I passed a data set, the information was taken from the data set. I think it ignores the SUBJECTSDN etc. data if a data set is used.

When I specified a 2028 date I got message

IRRD113I The certificate that you are creating has an incorrect date range.  The certificate is added with NOTRUST status.  

The IRRD113I message says

“has an incorrect date range”, the date range of the certificate being added is not within the date range established by the CA (certificate authority) certificate.

This is a hint that I need to renew my CA certificate as it will expire in the next two years.

After the gencert command was successful, the list command gave

Digital certificate information for user START1:                    

Label: NEWTECCTEST
Certificate ID: 2Qbi48HZ4/HVxebjxcPD48Xi40BA
Status: NOTRUST
Start Date: 2026/02/25 00:00:00
End Date: 2027/12/21 23:59:59
Serial Number:
>5B<
Issuer's Name:
>CN=DocZosCA.OU=CA.O=COLIN<
Subject's Name:
>CN=10.1.1.2.OU=SSS.O=NEWTECCTEST<
Subject's AltNames:
IP: 10.1.1.2
Signing Algorithm: sha256RSA
Key Usage: HANDSHAKE
Key Type: NIST ECC
Key Size: 384
Private Key: YES
...

Once I had renewed it, I had to restart the servers using it so they picked up the updated certificate.

Logging on to Git (on z/OS)

I’ve gradually been moving away from being 100% ISPF, and moving to OMVS. I use SSH terminals to access the Command Line Interface (CLI), just as I do on Linux, and I do most of my editing with VS Code on Linux, accessing the files on z/OS over sshfs so they look as if they are in a local Linux directory.

I wanted to use Git on z/OS. It was easy to install and start using, but I had problems logging on to Git.

As I understand it there are several ways of logging on to Git. I’ve used two, HTTPS and SSH.

HTTPS

You can logon to Git with a userid and a Personal Access Token (PAT). A PAT is like a sophisticated password. To get a PAT, go to your Git home page, click on your photo, and click Settings. On the public profile page which is displayed, at the bottom of the left hand column is “<> Developer settings”. Click on this link, then click on Personal access tokens.

Click on Tokens (classic) -> Generate new token (classic). You have to verify, so I clicked send code via email. Copy the PAT.

When you create a new PAT you can specify what the token can do, for example

  • full control of the private repository, or just access to the public repository, or access to the commit status.
  • can control the public keys
  • delete repositories

Click on generate token. A token is displayed such as ghp_7OSehXd6lP1234Gy0KRvqpmABALX8L618ycad. Copy this and save it somewhere securely. If you lose it, it is easy to delete and create another.

If you access Git using HTTPS, for example https://github.com/colinpaiceABC/ColinsRepo, it will prompt for userid (colinpaiceABC) and password; “password” here means the PAT.

You can store the userid and PAT for scripts etc to use to logon.

When you create the PAT you specify the validity period, for example two weeks, so you will need to have a process in place to renew the token.

SSH

You can logon to Git using SSH. Because keys are stored on your local machine, and on the Git server, you do not need to enter userid and password/PAT each time.

Git has excellent documentation on using ssh.

You need an SSH key. Check in directory ~/.ssh for files like id_….pub; I have id_ed25519.pub and id_rsa.pub. If you do not have a key, follow the Git documentation to create one.

Once I had my key I used the documentation on how to add it to Git.

Check you are using the ….pub file. It looks like

ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAA...XX/Xk colin@ColinNew

Add it using picture -> settings -> SSH and GPG keys ….

To use this you access Git via

git clone git@github.com:colinpaicemq/MQTools.git

If it doesn’t work as expected.

I got into a mess because I used

git clone https://github.com/colinpaicemq/MQTools.git

to clone the repository. When I tried to update the repository it asked me for userid and password!

You can change whether you use HTTPS or SSH to logon. For example to set SSH

git remote set-url origin git@github.com:colinpaicetest/testrepro.git

See the documentation.

How to stop when blocked.

I hit an interesting challenge when working with SMF-real-time.

An application can connect to the SMF real time service, and it will be sent data as it is produced. Depending on a flag the application can use:

  • Non blocking. The application gets a record if there is one available. If not, it gets a return code saying “No records available”.
  • Blocking. The application waits (potentially for ever) for a record.

Which is the best one to use?

This is a general problem; it is not just SMF real time that has these challenges.

What are the options- generally?

Non blocking

You need to loop to see if there are records, and if not, wait. The challenge is how long you should wait. If you wait for 10 seconds, then you may get a 10 second delay between the record being created and your application getting it. If you specify a shorter time you reduce this window, but the application does more loops per minute and so there is an increase in CPU cost.
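The trade-off can be sketched in Python. Here get_record is a stand-in for any non-blocking fetch (it is not the SMF API) which returns None when there are no records; poll_interval is the knob that trades latency against CPU cost:

```python
import time

def polling_loop(get_record, poll_interval, max_polls):
    """Non-blocking style: poll for a record, sleep when there is none.

    A record created just after a poll can wait up to poll_interval
    seconds before the application sees it; halving poll_interval
    halves that worst-case latency, but doubles the wake-ups (and so
    the CPU cost) per minute.
    """
    polls = 0
    records = []
    while polls < max_polls:
        record = get_record()          # returns None for "No records available"
        if record is not None:
            records.append(record)
        else:
            time.sleep(poll_interval)  # nothing there - wait and try again
        polls += 1
    return records
```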

Blocking

If you use blocking, there is no extra CPU cost from looping round. The problem is how do you stop your application cleanly when it is in a blocked wait, to allow cleaning up at the end of processing.

A third way

MQSeries processes application messages asynchronously. You can say “wait for a message, but time out after a specified time interval if no messages have been received”.

This method is very successful, but some applications abused it. They want their application to wake up every n seconds and check their application’s shutdown flag. If the flag is set, then shut down.

The correct answer in this case is to have MQ post an Event Control Block (ECB), and have the application’s shutdown code post another ECB; the mainline code waits for either of the ECBs to be posted, and takes the appropriate action. However, the lazy way of sleeping, waking, checking and sleeping again is quick and easy to code.
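In Python terms, the two-ECB pattern looks roughly like the sketch below, with threading.Event standing in for an ECB (all the names here are my own, not MQ’s):

```python
import threading

# Two "ECBs" (threading.Events standing in for MVS ECBs)
shutdown_ecb = threading.Event()   # posted by the shutdown code
message_ecb = threading.Event()    # posted when a message arrives
wakeup = threading.Event()         # posted whenever either ECB is posted

def post(ecb):
    """Post one 'ECB' and wake the mainline (a rough equivalent of POST)."""
    ecb.set()
    wakeup.set()

def mainline(handle_message):
    """Wait for either ECB to be posted, and take the appropriate action."""
    while True:
        wakeup.wait()              # blocked - no CPU used while waiting
        wakeup.clear()
        if shutdown_ecb.is_set():
            return "shutdown"
        if message_ecb.is_set():
            message_ecb.clear()
            handle_message()
```

The mainline burns no CPU while blocked, yet wakes immediately when either event is posted, instead of on the next tick of a sleep loop.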

What are the options for SMF real time?

With the SMF real time code, while one thread is running – and in a blocked wait – another thread can issue a disconnect() request. This wakes up the waiting thread with a “you’ve been woken up because of a disconnect” return code.

The solution is to use a threading model.

The basic SMF get code

while True:
    x, rc = f.get(wait=True)    # Blocking get
    if x == "":                 # Happens after disc()
        print("Get returned Null string, exiting loop")
        break
    if rc != "OK":
        print("get returned", rc)
        print("Disconnect", f.disc())
    i += 1
    # it worked - do something with the record
    ...
    print(i, len(x))            # Process the data
    ...

Make the SMF get a blocking get with timeout

With every get request, create another thread which wakes up after a timeout and issues the disconnect, causing the blocked get to wake up.

This may be expensive, as it does a timer thread create and cancel for every record.

def blocking_get_loop(f, timeout=10, event=None, max_records=None):
    i = 0
    while True:
        # execute the timeout function after "timeout" seconds
        t = Timer(timeout, timerpop, args=(f,))
        t.start()
        x, rc = f.get(wait=True)    # Blocking get
        t.cancel()

        ...


# This wakes after the specified interval then executes.
def timerpop(f):
    f.disc()

Wait for a terminal interrupt

Vignesh S sent me some code which I’ve taken and modified.

The code to the SMF get is in a separate thread. This means the main thread can execute the disconnect and wake up the blocked request.

# Usage example: run until Enter or interrupt, or max_records
if __name__ == "__main__":
    blocking_smf(max_records=6)          # execute the code below.

def blocking_smf(stream_name="IFASMF.INMEM", debug=0, max_records=None):
    f = pySMFRT(stream_name, debug=debug)
    f.conn(stream_name, debug=2)         # Explicit connect

    # Start the blocking loop in a separate thread
    get_thread = threading.Thread(target=mainlogic, args=(f,),
                                  kwargs={"max_records": 4})
    get_thread.start()

    try:
        # Main thread: wait for user input to stop
        input("Press Enter to stop...\n")
        print("Stopping...")
    except KeyboardInterrupt:
        print("Interrupted, stopping...")
    finally:
        f.disc()             # This unblocks the get() call
        get_thread.join()    # Wait for thread to exit

The key code is

get_thread = threading.Thread(target=mainlogic, args=(f,),
                              kwargs={"max_records": 4})
get_thread.start()

This attaches a thread which executes the function mainlogic, passing the positional parameter f and the keyword argument max_records.

The code to do the gets and process the requests is the same as before, with the addition of the count of records processed.

def blocking_get_loop(f, max_records=None):
    i = 0
    while True:
        x, rc = f.get(wait=True)    # Blocking get
        if x == "":                 # Happens after disc()
            print("Get returned Null string, exiting loop")
            break
        if rc != "OK":
            print("get returned", rc)
            print("Disconnect", f.disc())
        i += 1
        # it worked - do something with the record
        ...
        print(i, len(x))            # Process the data
        ...
        #
        if max_records and i >= max_records:
            print("Reached max records, stopping")
            print("Disconnect", f.disc())  # clean up if ended because of number of records
            break

If the requested number of records has been processed, or there has been an error, or unexpected data, then disconnect is called, and the function returns.

Handle a terminal interrupt

The code

try:
    # Main thread: wait for user input to stop
    input("Press Enter to stop...\n")
    print("Stopping...")
except KeyboardInterrupt:
    print("Interrupted, stopping...")
finally:
    f.disc()             # This unblocks the get() call
    get_thread.join()    # Wait for thread to exit

waits for input from the terminal.

This solution is not perfect because if the requested number of records are processed quickly, you still have to enter something at the keyboard.

Use an event with time out

One problem has been notifying the main task when the SMF get task has finished. You can use an event for this.

In the main logic have

def blocking_smf(stream_name="IFASMF.INMEM", debug=0, max_records=None):
    f = pySMFRT(stream_name, debug=debug)
    f.conn(stream_name, debug=2)    # Explicit connect
    myevent = threading.Event()

    # Start the blocking loop in a separate thread
    get_thread = threading.Thread(target=blocking_get_loop, args=(f,),
                                  kwargs={"max_records": max_records,
                                          "event": myevent})
    get_thread.start()
    # wait for the SMF get task to end - or the event to time out
    if myevent.wait(timeout=30) is False:
        print("We timed out")
        f.disc()            # wake up the blocking get

    get_thread.join()       # Wait for it to finish

In the SMF code

def blocking_get_loop(f, max_records=None, event=None):
    i = 0
    while True:
        x, rc = f.get(wait=True)    # Blocking get
        if x == "":                 # Happens after disc()
            print("Get returned Null string, exiting loop")
            break
        if rc != "OK":
            print("get returned", rc)
            print("Disconnect", f.disc())
        i += 1
        print(i, len(x))            # Process the data
        if max_records and i >= max_records:
            print("Reached max records, stopping")
            print("Disconnect", f.disc())
            break
    if event is not None:
        event.set()                 # wake up the main task


Why oh why is my application waiting?

I’ve been working on a presentation on performance, and came up with an analogy which made one aspect really obvious…. but I’ll come to that.

This blog post is a short discussion about software performance, and what affects it.

Obvious statement #1

The statement used to be “An application is either using CPU, or waiting”. I prefer to add “or using CPU and waiting”, which is not obvious unless you know what it means.

Obvious statement #2

All applications wait at the same speed. If you are waiting for a request from a remote server, it does not matter how fast your client machine is.

Where can an application wait?

I’ll go from longest to shortest wait times.

Waiting for the end user

If you have displayed a menu for an end user to complete, you might wait minutes (or hours) for the end user to complete the information and send it.

Waiting for a remote request

This can be a request to a remote server to do something. This could be to buy something, a simple web lookup, or a name server lookup. These should all take under a second.

Waiting for disk I/O

If your application is doing database work, such as DB2, there can be many disk I/Os. Any updates are logged to disk for recovery purposes. If your disk response time is typically 1 ms, then you may have to wait several milliseconds. When your application issues a commit and wants to log data, there will likely be an I/O already in progress, so you have to wait for that I/O to complete before any more data can be written. Typically a database can write 16 4KB pages at a time. If the database logging is very active, you may have to wait until any queued data in the log buffers is written before your application’s data can be written. An I/O consists of a set up followed by data transmission. The set up time is usually pretty constant, but more data takes more time to transfer. Writing 16 * 4KB pages will usually take longer than writing one 4KB page.

An application writing to a file may buffer up several records before writing them to the external medium. Your application wrote 10 records, but there was only one I/O.
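The buffering effect can be sketched in Python. BufferedWriter is an illustrative class of my own (not a real file API); the io_count field stands in for physical disk writes:

```python
class BufferedWriter:
    """Collect records and write them out in batches.

    Ten 1-record writes with a 10-record buffer cost one 'I/O':
    the set-up cost is paid once, and only the (larger) data
    transfer grows with the amount written.
    """
    def __init__(self, batch_size=10):
        self.batch_size = batch_size
        self.buffer = []
        self.io_count = 0    # how many "physical" writes we did

    def write(self, record):
        self.buffer.append(record)
        if len(self.buffer) >= self.batch_size:
            self.flush()

    def flush(self):
        if self.buffer:
            self.io_count += 1   # one I/O writes the whole batch
            self.buffer = []

w = BufferedWriter(batch_size=10)
for n in range(10):
    w.write(n)
# 10 application writes, 1 physical I/O
```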

These I/Os should be measured in milliseconds (or microseconds).

Database and record locks

If your application wants to update some information in a database record it could do

  • Get record for update (this prevents other threads from updating it)
  • Display a menu for the end user to complete
  • When the form has been completed, update the record and commit.

This is an example of “Waiting for the end user”. Another application wanting to update the same record may get an “unavailable” response, or wait until the first application has finished.

You can work around this using logic like

  • Each record has a last updated timestamp.
  • Read the record note the last updated timestamp, display the menu
  • When the form has been completed..
    • Read the record for update from the database, and check the “last updated time”.
    • If the time stamp matches the saved value, update the information and commit the changes.
    • If the time stamp does not match, then the record has been updated – release it, and go to the top and try again.
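The timestamp scheme above can be sketched with a dict standing in for the record store (update_record and the record layout are my own illustration, not a real database API; a real application would recompute the new value before each retry):

```python
import time

db = {"rec1": {"value": 100, "updated": 1.0}}   # toy record store

def read(key):
    """Read the record, noting the last-updated timestamp."""
    rec = db[key]
    return rec["value"], rec["updated"]

def update_record(key, new_value, saved_timestamp, retries=3):
    """Optimistic update: only write if nobody changed the record
    since we read it; otherwise note the new timestamp and retry."""
    for _ in range(retries):
        rec = db[key]                      # "read for update"
        if rec["updated"] == saved_timestamp:
            rec["value"] = new_value       # timestamps match - safe to update
            rec["updated"] = time.time()   # ... and "commit"
            return True
        saved_timestamp = rec["updated"]   # someone got there first - retry
    return False
```

No lock is held while the end user stares at the menu; contention is only detected (and resolved) at commit time.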

Coupling Facility access

This is measured in 10s of microseconds. The busier the CF is, the longer requests take.

Latches

Latches are used for serialisation of small sections of code. For example updating storage chains.

If you have two queues of work elements, one for queued work and one for in-progress work, then in a single threaded application you can simply move a work element between queues. With multiple threads you need some form of locking.

In its simplest form it is

pthread_mutex_lock(mymutex)
work = waitqueue.pop()
active.push(work)
pthread_mutex_unlock(mymutex)
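A runnable Python version of the same pattern, with threading.Lock taking the place of the pthread mutex:

```python
import threading
from collections import deque

lock = threading.Lock()
waitqueue = deque(["job1", "job2"])   # queued work
active = deque()                      # in-progress work

def take_work():
    """Move one work element from the wait queue to the active queue.

    The lock serialises the two chain updates, so another thread can
    never see a half-moved element.
    """
    with lock:                        # latch: held only for the move
        if not waitqueue:
            return None
        work = waitqueue.popleft()
        active.append(work)
        return work
```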

You should design your code so few threads have to wait.

Waiting for CPU

This can be due to

  • The LPAR is constrained for CPU; other work gets priority, and your application is not dispatched.
  • The CEC (physical box) is constrained for CPU and your LPAR is not being dispatched.

If your LPAR has been configured to use only one CPU, and there is spare capacity in the CEC, your LPAR will not be able to use it.

Waiting for paging etc

In these days of lots of real storage in the CEC, waiting for paging etc. is not much of an issue. If the virtual page you want is not available, the operating system has to allocate the page and map it to real storage.

Waiting for data – using CPU and waiting.

Some 101 education on the Z computer architecture

  • The processors for the z architecture are in books. Think of a book as being a physical card which you can plug/unplug from a rack.
  • You can have multiple books.
  • Each book has one or more chips
  • Each chip has one or more CPUs.
  • There is cache (RAM) for each CPU
  • There is cache for each chip
  • There is cache for each book
  • At a hardware level, when you are updating a real page, it is locked to your CPU.
  • If another CPU wants to use the same real page, it has to send a message to the holding CPU requesting exclusive use
  • The physical distance between two CPUs on the same chip is measured in millimetres
  • The distance between two CPUs in the same book is measured in centimetres
  • The distance between two CPUs in different books could be a metre.
  • The time to send information depends on the distance it has to travel. Sharing data between two CPUs on the same chip will be faster than sharing data between CPUs in different books.

Some instructions like compare and swap are used for serialising access to one field.

  • Load register 4 with the value from the data field. This could be slow if the real page has to be fetched from another CPU. It could be fast if the storage is in the CPU, chip or book cache.
  • Load register 5 with new value
  • Compare and swap does
    • Get the exclusive lock on the data field
    • If the value of the data field matches the value in register 4 (the compare)
    • then replace it with the value in register 5 (the swap)
    • else say mismatch
    • Unlock.

These instructions (especially the first load) can take a long time, especially if the data field is “owned” by another CPU, and the hardware has to go and get the storage from a CPU in a different book, a metre away.
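The compare-and-swap sequence above can be sketched in Python, with a lock standing in for the hardware interlock (compare_and_swap is my own helper; the real thing is a single instruction, not a function):

```python
import threading

_interlock = threading.Lock()   # stands in for the hardware exclusive lock

def compare_and_swap(cell, expected, new):
    """If cell[0] still holds the expected value, store the new one
    and report success; otherwise report the mismatch so the caller
    can re-load and retry."""
    with _interlock:             # "get the exclusive lock on the data field"
        if cell[0] == expected:  # the compare
            cell[0] = new        # the swap
            return True
        return False             # mismatch - caller retries
                                 # (lock released on exit)

counter = [0]
# the typical retry loop: load, attempt the swap, retry on mismatch
while True:
    old = counter[0]             # "load register 4" - this is the slow load
    if compare_and_swap(counter, old, old + 1):
        break
```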

A common technique for Compare and Swap is to have a common trace table. Each thread gets the next free element, and sets the new next-free pointer. With many CPUs actively using Compare and Swap on the same field, these instructions could be a major bottleneck.

A better design is to give each application thread its own trace buffer, avoiding the need for a serialisation instruction, so there is no contention.
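The per-thread design can be sketched with threading.local, which gives each thread its own buffer so there is no shared “next free element” to fight over:

```python
import threading

_tls = threading.local()    # a separate attribute namespace per thread

def trace(entry):
    """Append to this thread's private trace buffer.

    No lock and no compare-and-swap: no other thread ever touches
    this buffer, so there is nothing to contend for.
    """
    if not hasattr(_tls, "buffer"):
        _tls.buffer = []    # first trace call on this thread
    _tls.buffer.append(entry)

def worker(name, results):
    for i in range(3):
        trace(f"{name}-{i}")
    results[name] = list(_tls.buffer)   # copy out this thread's buffer

results = {}
threads = [threading.Thread(target=worker, args=(n, results)) for n in ("a", "b")]
for t in threads:
    t.start()
for t in threads:
    t.join()
```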

Storage contention

We finally get to the bit with the analogy to explain storage contention

You have an array of counters, with one slot for each potential thread; with 16 threads, your array is size 16.

Each thread updates its counter regularly.

Imagine you are sitting in a classroom listening to me lecture about performance and storage contention.

I have a sheet of paper with 16 boxes drawn on it, one per person (equivalent to one per thread).
I pick a person in the front row, and ask them to make a tick on the page in their box every 5 seconds.

Tick, tick, tick … easy

Now I introduce a second person, and it gets harder. The first person makes a tick; I then walk the piece of paper across the classroom to the second person, who makes a tick. I walk back to the first, who makes another tick, etc.

This will be very slow.

It gets worse. My colleague is giving the same lecture upstairs. I now do my two people, then go up a floor so someone in the other classroom can make a mark. I then go back down to my classroom, and my people (who have been waiting for me) can then make their ticks.

How to solve the contention?

The obvious answer is to give each person their own page, so there is no contention. In hardware terms it might be a 4KB page – or it may be a 256 byte cache line.

I love this analogy; it has many levels of truth.

Getting from C enumerations to Python dicts.

I wanted to create Python dictionaries from C enumerations. For example, with System SSL there is a datatype x509_attribute_type.

This has a definition

typedef enum {
    x509_attr_unknown   = 0,
    x509_attr_name      = 1,  /* 2.5.4.41 */
    x509_attr_surname   = 2,  /* 2.5.4.4  */
    x509_attr_givenName = 3,  /* 2.5.4.42 */
    x509_attr_initials  = 4,  /* 2.5.4.43 */
    ...
} x509_attribute_type;

I wanted to create

x509_attribute_type = {
    "x509_attr_unknown"   : 0,
    "x509_attr_name"      : 1,
    "x509_attr_surname"   : 2,
    "x509_attr_givenName" : 3,
    "x509_attr_initials"  : 4,
    ...
}

I’ve done this using ISPF macros, but thought it would be easier(!) to automate it.

There is a standard way for compilers to produce information for debuggers to understand the structure of programs. DWARF is a debugging information file format used by many compilers and debuggers to support source level debugging. The data is stored internally using the Executable and Linkable Format (ELF).

Getting the DWARF file

To get the structures into the DWARF file, it looks like you have to use the structures; that is, if you just #include a file, by default the definitions are not stored in the DWARF file.

When I used

#include <gskcms.h>
...
x509_attribute_type aaa;
x509_name_type      nt;
x509_string_type    st;
x509_ecurve_type    et;
int l = sizeof(aaa) + sizeof(nt) + sizeof(st) + sizeof(et);

I got the structures x509_attribute_type etc in the DWARF file.

Compiling the file

I used the USS xlc command with

xlc ...-Wc,debug(FORMAT(DWARF),level(9))... abc.c

or

/bin/xlclang -v -qlist=d.lst -qsource -qdebug=format=dwarf -g -c abc.c -o abc.o

This created a file abc.dbg

The .dbg file includes an eye catcher of ELF (in ASCII)

I downloaded the file in binary to Linux.

There are various packages which are meant to be able to process the file. The only one I got to work successfully was dwarfdump. The Linux version has many options to specify what data you want to select and how you want to report it. dwarfdump reported some errors, but I got most of the information out.

readelf displays some of the information in the file, but I could not get it to display the information about the variables.

What does the output from dwarfdump look like?

The format has changed slightly since I first used this a year or so ago. The data are not always aligned on the same columns, and values like <146> and 2426 (used as a locator id) are now hexadecimal offsets.

The older format

<1>< 2426>      DW_TAG_subprogram
                  DW_AT_type      <146>
                  DW_AT_name      printCert
                  DW_AT_external  yes
...
<2>< 2456>      DW_TAG_formal_parameter
                  DW_AT_name      id
                  DW_AT_type      <10387>
...

The newer format

< 1><0x0000132d>    DW_TAG_typedef
                      DW_AT_type        <0x00001349>
                      DW_AT_name        x509_attribute_type
                      DW_AT_decl_file   0x00000002
                      DW_AT_decl_line   0x000000a7
                      DW_AT_decl_column 0x00000003
< 1><0x00001349>    DW_TAG_enumeration_type
                      DW_AT_name        __3
                      DW_AT_byte_size   0x00000002
                      DW_AT_decl_file   0x00000002
                      DW_AT_decl_line   0x00000091
                      DW_AT_decl_column 0x0000000e
                      DW_AT_sibling     <0x00001553>
< 2><0x00001356>    DW_TAG_enumerator
                      DW_AT_name        x509_attr_unknown
                      DW_AT_const_value 0
< 2><0x0000136a>    DW_TAG_enumerator
                      DW_AT_name        x509_attr_name
                      DW_AT_const_value 1

Some of the fields are obvious… others are more cryptic, with varying levels of indirection.

  • <1><0x0000132d> is a high level object <1> with id <0x0000132d>
    • DW_TAG_typedef is a typedef
    • DW_AT_type <0x00001349> see <0x00001349> for the definition (below)
    • DW_AT_name x509_attribute_type is the name of the typedef
    • DW_AT_decl_file 0x00000002 there is a file definition #2… but I could not find it
    • DW_AT_decl_line 0x000000a7 the position within the file
    • DW_AT_decl_column 0x00000003
  • < 1><0x00001349> DW_TAG_enumeration_type. This is referred to by the previous element
    • DW_AT_name __3 this is an internally generated name
  • < 2><0x00001356> DW_TAG_enumerator This is part of the <1> <0x00001349> above. It is an enumerator.
    • DW_AT_name x509_attr_unknown this is the label of the value
    • DW_AT_const_value 0 with value 0
  • the next is label x509_attr_name with value 1
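In code terms, the bullets above amount to a dictionary lookup. Here is a toy sketch of the idea – the dict shape is my own illustration, not dwarfdump output, though the ids and names are copied from the listing:

```python
# Resolving DW_AT_type: the typedef at <0x0000132d> points at the
# enumeration at <0x00001349>, which holds the enumerator values.
entries = {
    "<0x0000132d>": {"tag": "DW_TAG_typedef",
                     "DW_AT_name": "x509_attribute_type",
                     "DW_AT_type": "<0x00001349>"},
    "<0x00001349>": {"tag": "DW_TAG_enumeration_type",
                     "enumerators": [("x509_attr_unknown", 0),
                                     ("x509_attr_name", 1)]},
}
typedef = entries["<0x0000132d>"]
target = entries[typedef["DW_AT_type"]]  # follow the reference
print(typedef["DW_AT_name"], "->", target["tag"])
```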

Other interesting data

I have a function

int colin(char * cinput, gsk_buffer * binput )
{
...
}
and
typedef struct _gsk_data_buffer {
    gsk_size length;
    void * data;
} gsk_data_buffer, gsk_buffer;

Breaking this down into its parts, there is an entry in the DWARF output for “int”, “colin”, “*”, “char”, “cinput”, “*”, “gsk_buffer” (which has levels within it), and “binput”

< 1><0x0000009a>    DW_TAG_base_type
DW_AT_name int
DW_AT_encoding DW_ATE_signed
DW_AT_byte_size 0x00000004

< 1><0x00000142> DW_TAG_subprogram
DW_AT_type <0x0000009a>
DW_AT_name colin
DW_AT_external yes(1)
...
< 2><0x0000015c> DW_TAG_formal_parameter
DW_AT_name cinput
DW_AT_type <0x00001fa9>

< 2><0x00000173> DW_TAG_formal_parameter
DW_AT_name binput
DW_AT_type <0x00001fb5>
:

< 2><0x0000018a> DW_TAG_variable
DW_AT_name __func__
DW_AT_type <0x00001fc0>

For cinput

< 1><0x00001fa9>    DW_TAG_pointer_type
DW_AT_type <0x0000006e>
DW_AT_address_class 0x0000000a
< 1><0x0000006e> DW_TAG_base_type
DW_AT_name unsigned char
DW_AT_encoding DW_ATE_unsigned_char
DW_AT_byte_size 0x00000001

For binput

  • binput (1fbf) -> DW_TAG_pointer_type (1e2e) -> gsk_buffer (1e41) -> _gsk_data_buffer of length 8.
  • _gsk_data_buffer has two components
    • length -> gsk_size…
    • data (1faf) -> pointer_type (13c) -> unspecified type void
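That chain of references can be followed mechanically. A sketch of the idea – the dict shape is my own illustration, and the final id (13c0) is invented since the listing does not show it:

```python
# Walk DW_AT_type references until an entry has no further type reference.
defs = {
    "1fbf": {"tag": "DW_TAG_formal_parameter", "DW_AT_type": "1e2e"},
    "1e2e": {"tag": "DW_TAG_pointer_type", "DW_AT_type": "1e41"},
    "1e41": {"tag": "DW_TAG_typedef", "DW_AT_name": "gsk_buffer",
             "DW_AT_type": "13c0"},
    "13c0": {"tag": "DW_TAG_structure_type",
             "DW_AT_name": "_gsk_data_buffer", "DW_AT_byte_size": 8},
}
chain = []
key = "1fbf"
while key in defs:
    chain.append(defs[key]["tag"])
    key = defs[key].get("DW_AT_type")  # None when the chain ends
print(" -> ".join(chain))
```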

Processing the data in the file

Parse the data

An entry like DW_AT_type <0x0000006e> refers to a key of <0x0000006e>. The definition for this could be before or after the current entry being processed.

I found it easiest to process the whole file (in Python), and build up a Python dictionary of each high level definition.

I could then process the dict one element at a time, and know that all the elements it refers to are in the dict.

There are definitions like

typedef struct _x509_tbs_certificate { 
x509_version version;
gsk_buffer serialNumber;
x509_algorithm_identifier signature;
x509_name issuer;
x509_validity validity;
x509_name subject;
x509_public_key_info subjectPublicKeyInfo;
gsk_bitstring issuerUniqueId;
gsk_bitstring subjectUniqueId;
x509_extensions extensions;
gsk_octet rsvd[16];
} x509_tbs_certificate;

Some elements, like x509_algorithm_identifier, have a complex structure which refers to other structures. I think the maximum depth for one of the structures was 6 levels.
If you are processing a structure you need to decide how many levels deep you process. For the enumeration I was just interested in the < 1> and < 2> definitions and ignored anything below that depth.

For each < 1> element, there may be zero or more < 2> elements. I added each < 2> element to a Python list within the < 1> element.

You may decide to ignore entries such as which file, row or column a definition is in.

My Python code to parse the file is

fn = "./dwarf.txt"
with open(fn) as fp:
    # skip the stuff at the front
    for line in fp:
        if line[0:5] == "LOCAL":
            break
    all = {}     # return the data here
    keep = False # only true within a < 1> or < 2> element
    for line in fp:
        # if the line starts with ".debug" we're done
        if line[0:6] == ".debug":
            break
        lhs = line[0:4]
        # do not process nested requests
        if line[0:1] == "<":
            if lhs in ["< 1>", "< 2>"]:
                keep = True
            else:
                keep = False
        else:
            if line[0:1] != " ":
                continue
        if keep is False:  # only within < 1> and < 2>
            continue
        # we now have records of interest < 1> and < 2> and records underneath them
        if lhs == "< 1>":
            key = line[4:16].strip()  # "< 123>"
            kwds1 = {}
            all[key] = kwds1
            kwds1["l2"] = []  # empty list to which we add the < 2> elements
            state = 1         # within a < 1> element
            kwds1["type"] = line[20:-1]
        elif lhs == "< 2>":
            kwds2 = {}
            kwds1["l2"].append(kwds2)
            kwds2["type"] = line[21:-1]
            state = 2  # within a < 2> element
        else:
            tag = line[0:47].strip()
            value = line[47:-1].strip()
            if state == 1:
                kwds1[tag] = value
            else:
                kwds2[tag] = value

Process the data

I want to look for the enumerate definitions and only process those

print("=================================")
for e in all:
    d = all[e]
    if d["type"] == "DW_TAG_typedef":  # only these
        print(d["DW_AT_name"], "{")
        at_type = d["DW_AT_type"]   # get the key of the value
        l = all[at_type]["l2"]      # and the < 2> elements within it
        for ll in l:
            if "DW_AT_const_value" in ll:
                print('  "' + ll["DW_AT_name"] + '":', ll["DW_AT_const_value"])
        print("}")

This produced code like

x509_attribute_type = {
"x509_attr_unknown" : 0,
"x509_attr_name" : 1,
"x509_attr_surname" : 2,
"x509_attr_givenName" : 3,
"x509_attr_initials" : 4,
...
}

You can do more advanced things if you want to, for example create structures for Python Struct (struct — Interpret bytes as packed binary data) to build control blocks. With this you can pass in a dict of names, and the struct definitions, and it converts the bytes into the specified definitions ( int, char, byte etc) with the correct bigend/little-end processing etc.
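As a sketch of that idea (the field names and formats below are invented for illustration, not generated from the DWARF output), the struct module can turn a list of (name, format) pairs into a control block decoder:

```python
import struct

# Hypothetical sketch: decode a big-endian control block given a field list
# of (name, struct-format) pairs, as might be generated from the DWARF data.
fields = [("eyecatcher", "4s"), ("length", "I")]  # illustrative names/formats
fmt = ">" + "".join(f for _, f in fields)         # ">" = big-endian, as on z/OS

raw = struct.pack(fmt, b"PXTR", 8)                # build some sample bytes
block = dict(zip((n for n, _ in fields), struct.unpack(fmt, raw)))
print(block)  # {'eyecatcher': b'PXTR', 'length': 8}
```

The same field list drives both pack and unpack, so one generated definition handles building and decoding the control block.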

Making WordPress code blocks display the right size

I started using WordPress’s code block with syntax highlighting. Unfortunately the text was displayed too large.

What I wanted

x509_attribute_type = {
"x509_attr_unknown" : 0,
"x509_attr_name" : 1,
"x509_attr_surname" : 2,
"x509_attr_givenName" : 3,
"x509_attr_initials" : 4,
...
}

What I got

x509_attribute_type = {
"x509_attr_unknown" : 0,
"x509_attr_name" : 1,
"x509_attr_surname" : 2,
"x509_attr_givenName" : 3,
"x509_attr_initials" : 4,
...
}

There were “helpful” suggestions such as changing the CSS, but these did not work for me.
What did work (but involved several clicks) is

  • create a pre-formatted block
  • paste the data
  • in the FONT SIZE box on the right hand side click S
  • go back to your block, and make it into a code block
  • from the pull down list of languages, select the language you want
  • It should now display the code in the right sized font

What the above actions have done is to put <pre class="wp-block-code has-small-font-size"></pre> around the code snippet – so adding that markup yourself is another solution.

Using irrseq00 to extract profile information from RACF

IRRSEQ00, also known as R_ADMIN, can be used by an application to issue RACF commands, or to extract information from RACF. It is used by pysear, the Python interface to RACF.

Using this was not difficult – but it has its challenges (including a designed-in storage leak!).

I also had a side visit into That’s strange – the compile worked.

Challenges

The documentation explains how to search through the profiles.

The notes say

When using extract-next to return all general resource profiles for a given class, all the discrete profiles are returned, followed by the generic profiles. An output flag indicates if the returned profile is generic. A flag can be specified in the parameter list to request only the generic profiles in a given class. If only the discrete profiles are desired, check the output flag indicating whether the returned profile is generic. If it is, ignore the entry and terminate your extract-next processing.

  • To search for all of the profiles, specify a single blank as the name, and use the extract_next value.
  • There are discrete and generic profiles. If you specify flag bit 0x20000000 (“For extract-next requests: return the next alphabetic generic profile”), only generic profiles are returned – discrete profiles are not retrieved. If you do not specify this bit, you get all profiles.

This is where it gets hard.

  • The output is returned in a buffer allocated by IRRSEQ00. This is the same format as the control block used to specify parameters. After a successful request, it will contain the profile, and may return all of the segments (such as a userid’s TSO segment depending on the option specified).
  • Extract the information you are interested in.
  • Use this data as input to the next irrseq00 call. I used pParms = buffer;
  • After the next IRRSEQ00 request FREE THE STORAGE pointed to by pParms.
  • Use the data returned to you in the new buffer.
  • Loop

The problems with this are

The documentation says

The output storage is obtained in the subpool specified by the caller in the Out_message_subpool parameter. It is the responsibility of the caller to free this storage.

I do not know how to issue a FREEMAIN/STORAGE request from a C program! Because you cannot free the z/OS storage from C, you effectively get a storage leak!

I expect the developers did not think of this problem. Other RACF calls return the data in the same control block, and you get a return code if the control block is too small.

I solved this by having some assembler code in my C program see Putting assembler code inside a C program.

My program

Declare constants

 #pragma linkage(IRRSEQ00 ,OS) 
// Include standard libraries
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <inttypes.h>

int main( int argc, char *argv??(??))
{
    // this structure, taken from pysear, is the parameter block
    typedef struct {
        char eyecatcher[4];            // 'PXTR'
        uint32_t result_buffer_length; // result buffer length
        uint8_t subpool;               // subpool of result buffer
        uint8_t version;               // parameter list version
        uint8_t reserved_1[2];         // reserved
        char class_name[8];            // class name - upper case, blank padded
        uint32_t profile_name_length;  // length of profile name
        char reserved_2[2];            // reserved
        char volume[6];                // volume (for data set extract)
        char reserved_3[4];            // reserved
        uint32_t flags;                // see flag constants below
        uint32_t segment_count;        // number of segments
        char reserved_4[16];           // reserved
        // start of extracted data
        char data[1];
    } generic_extract_parms_results_t;
    // Note: This structure is used for both input & output.

Set up the irrseq00 parameters

I want to find all profiles for class ACCTNUM. You specify a starting profile of one blank, and use the get next request.

char work_area[1024]; 
int rc;
long SAF_RC,RACF_RC,RACF_RS;
long ALET = 0;

char Function_code = 0x20; // Extract next general resource profile
// RACF is ignored for problem state
char RACF_userid[9];
char * ACEE_ptr = 0;
RACF_userid[0]=0; // set length to 0

char Out_message_subpool = 1;
char * Out_message_string; // returned by program

generic_extract_parms_results_t parms;
memset(&parms,0,sizeof(parms));
memcpy(&parms.eyecatcher,"PXTR",4);
parms.version = 0;
memcpy(&parms.class_name,"ACCTNUM ",8);
parms.profile_name_length = 1;
parms.data[0] =' ';

char *pParms = (char *) & parms;
Function_code = 0x20; // get next resource
int i;
generic_extract_parms_results_t * pGEP;

Loop round getting the data

I knew there were only 3 discrete profiles and one generic (with a “*” in it).

For extract-next requests, SAF_RC = 4, RACF_RC = 4 and RACF_RS = 4 means there are no more profiles that the caller is authorised to extract.

for (i = 0; i < 6; i++)
{
    parms.flags = 0x04000000; // get next + base only
    rc = IRRSEQ00(
        &work_area,
        &ALET, &SAF_RC,
        &ALET, &RACF_RC,
        &ALET, &RACF_RS,
        &Function_code,
        pParms,
        &RACF_userid,
        &ACEE_ptr,
        &Out_message_subpool,
        &Out_message_string);
    pParms = Out_message_string;

    pGEP = (generic_extract_parms_results_t *) pParms;
    if (RACF_RC == 0)
    {
        printf("return code SAF %ld RACF %ld RS %ld %2.2x %8.8x %*.*s \n",
               SAF_RC, RACF_RC, RACF_RS, Function_code, pGEP->flags,
               (int) pGEP->profile_name_length,
               (int) pGEP->profile_name_length, pGEP->data);
    }
    else
    {
        printf("return code SAF %ld RACF %ld RS %ld \n",
               SAF_RC, RACF_RC, RACF_RS);
        break;
    }
}
return 8;
}

The results were

return code SAF 0 RACF 0 RS 0 20 00000000  ACCT# 
return code SAF 0 RACF 0 RS 0 20 00000000 IZUACCT
return code SAF 0 RACF 0 RS 0 20 10000000 TESTGEN*
return code SAF 4 RACF 4 RS 4

For TESTGEN* flag bit 0x10000000 is on, which the documentation describes as “On output: indicates that the profile returned by RACF is generic. When using extract-next to cycle through profiles, the caller should not alter this bit.” For the others this bit is off, meaning the profiles are discrete.
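The flag test itself is trivial; a sketch in Python for illustration (the program above is C, but the bit value is the one from the documentation quoted here):

```python
# 0x10000000: "On output: indicates that the profile returned by RACF is generic"
GENERIC_FLAG = 0x10000000

def is_generic(flags):
    # test the generic-profile output flag
    return (flags & GENERIC_FLAG) != 0

print(is_generic(0x10000000))  # TESTGEN* : generic
print(is_generic(0x00000000))  # ACCT#    : discrete
```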