That’s strange – the compile worked.

I was setting up a script to compile some C code in Unix Services, and it worked – when I expected the bind to fail because I had not specified where to find a stub file.

How to compile the source

I used a shell script to compile and bind the source. I was surprised to see that it worked, because it needed some Linkedit stubs from CSSLIB. I thought I needed

export _C89_LSYSLIB="CEE.SCEELKEX:CEE.SCEELKED:CBC.SCCNOBJ:SYS1.CSSLIB"

but it worked without it.

The script

name=irrseq 

export _C89_CCMODE=1
# export _C89_LSYSLIB="CEE.SCEELKEX:CEE.SCEELKED:CBC.SCCNOBJ:SYS1.CSSLIB"
p1="-Wc,arch(8),target(zOSV2R3),list,source,ilp32,gonum,asm,float(ieee)"
p5=" -I. "
p8="-Wc,LIST(c.lst),SOURCE,NOWARN64,XREF,SHOWINC "

xlc $p1 $p5 $p8 -c $name.c -o $name.o
# now bind it
l1="-Wl,LIST,MAP,XREF "
/bin/xlc $name.o -o irrseq -V $l1 1>binder.out

The binder output had

XL_CONFIG=/bin/../usr/lpp/cbclib/xlc/etc/xlc.cfg:xlc 
-v -Wl,LIST,MAP,XREF irrseq.o -o./irrseq
STEPLIB=NONE
_C89_ACCEPTABLE_RC=4
_C89_PVERSION=0x42040000
_C89_PSYSIX=
_C89_PSYSLIB=CEE.SCEEOBJ:CEE.SCEECPP
_C89_LSYSLIB=CEE.SCEELKEX:CEE.SCEELKED:CBC.SCCNOBJ:SYS1.CSSLIB

Where did these come from? I was interested in SYS1.CSSLIB. It came from the xlc config file below.

xlc config file

By default the compile command uses a configuration file /usr/lpp/cbclib/xlc/etc/xlc.cfg .

The key parts of this file are

* FUNCTION: z/OS V2.4 XL C/C++ Compiler Configuration file
*
* Licensed Materials - Property of IBM
* 5650-ZOS Copyright IBM Corp. 2004, 2018.
* US Government Users Restricted Rights - Use, duplication or
* disclosure restricted by GSA ADP Schedule Contract with IBM Corp.
*

* C compiler, extended mode
xlc: use = DEFLT
...
* common definitions
DEFLT: cppcomp = /usr/lpp/cbclib/xlc/exe/ccndrvr
ccomp = /usr/lpp/cbclib/xlc/exe/ccndrvr
ipacomp = /usr/lpp/cbclib/xlc/exe/ccndrvr
ipa = /bin/c89
as = /bin/c89
ld_c = /bin/c89
ld_cpp = /bin/cxx
xlC = /usr/lpp/cbclib/xlc/bin/xlc
xlCcopt = -D_XOPEN_SOURCE
sysobj = cee.sceeobj:cee.sceecpp
syslib = cee.sceelkex:cee.sceelked:cbc.sccnobj:sys1.csslib
syslib_x = cee.sceebnd2:cbc.sccnobj:sys1.csslib
exportlist_c = NONE
exportlist_cpp = cee.sceelib(c128n):cbc.sclbsid(iostream,complex)
exportlist_c_x = cee.sceelib(celhs003,celhs001)
exportlist_cpp_x = ...
exportlist_c_64 = cee.sceelib(celqs003)
exportlist_cpp_64 = ...
steplib = NONE

Where the _x entries are for xplink.
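The stanza format above (a name, then attribute = value pairs, with use = pointing at shared definitions) can be explored with a short parser. This is a sketch of the format as shown in the listing, not the real c89/xlc configuration processing; the sample stanza is cut down from the file above.

```python
# Sketch only: parse "stanza: attribute = value" lines like the xlc.cfg
# extract above, and resolve attributes through the "use =" chain.
# This models the format as shown, not the real c89/xlc processing.

def parse_cfg(text):
    stanzas, current = {}, None
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("*"):      # '*' lines are comments
            continue
        if ":" in line.split("=")[0]:             # start of a new stanza
            name, rest = line.split(":", 1)
            current = name.strip()
            stanzas[current] = {}
            line = rest.strip()
            if not line:
                continue
        if current and "=" in line:
            key, value = line.split("=", 1)
            stanzas[current][key.strip()] = value.strip()
    return stanzas

def lookup(stanzas, stanza, attr):
    """Return attr from stanza, falling back to its 'use =' stanza."""
    s = stanzas.get(stanza, {})
    if attr in s:
        return s[attr]
    if "use" in s:
        return lookup(stanzas, s["use"], attr)
    return None

cfg = """\
* common definitions
xlc: use = DEFLT
DEFLT: ld_c = /bin/c89
    syslib = cee.sceelkex:cee.sceelked:cbc.sccnobj:sys1.csslib
"""
stanzas = parse_cfg(cfg)
print(lookup(stanzas, "xlc", "syslib"))
```

This shows how the xlc stanza picks up syslib (and so SYS1.CSSLIB) from the DEFLT stanza via use = DEFLT.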

It is easy to find the answer when you know the solution.

Note:

Without the export _C89_CCMODE=1

I got

IEW2763S DE07 FILE ASSOCIATED WITH DDNAME /0000002 CANNOT BE OPENED
BECAUSE THE FILE DOES NOT EXIST OR CANNOT BE CREATED.
IEW2302E 1031 THE DATA SET SPECIFIED BY DDNAME /0000002 COULD NOT BE
FOUND, AND THUS HAS NOT BEEN INCLUDED.
FSUM3065 The LINKEDIT step ended with return code 8.

Compiling in 64 bit

It was simple to change the script to compile it in 64 bit mode, but overall this didn’t work.

p1="-Wc,arch(8),target(zOSV2R3),list,source,lp64,gonum,asm,float(ieee)" 
...
l1="-Wl,LIST,MAP,XREF -q64 "

When I compiled in 64 bit mode, and tried to bind in 31/32 bit mode (omitting the -q64 option) I got messages like

 IEW2469E 9907 THE ATTRIBUTES OF A REFERENCE TO isprint FROM SECTION irrseq#C DO
NOT MATCH THE ATTRIBUTES OF THE TARGET SYMBOL. REASON 2
...
IEW2469E 9907 THE ATTRIBUTES OF A REFERENCE TO IRRSEQ00 FROM SECTION irrseq#C
DO NOT MATCH THE ATTRIBUTES OF THE TARGET SYMBOL. REASON 3

IEW2456E 9207 SYMBOL CELQSG03 UNRESOLVED. MEMBER COULD NOT BE INCLUDED FROM
THE DESIGNATED CALL LIBRARY.
...
IEW2470E 9511 ORDERED SECTION CEESTART NOT FOUND IN MODULE.
IEW2648E 5111 ENTRY CEESTART IS NOT A CSECT OR AN EXTERNAL NAME IN THE MODULE.

IEW2469E THE ATTRIBUTES OF A REFERENCE TO … FROM SECTION … DO
NOT MATCH THE ATTRIBUTES OF THE TARGET SYMBOL. REASON x

  • Reason 2 The xplink attributes of the reference and target do not match.
  • Reason 3 Either the reference or the target is in amode 64 and the amodes do not match. The IRRSEQ00 stub is only available in 31 bit mode, my program was 64 bit amode.

IEW2456E SYMBOL CELQINPL UNRESOLVED. MEMBER COULD NOT BE INCLUDED
FROM THE DESIGNATED CALL LIBRARY.

  • The compile in 64 bit mode generates an "include…" of the 64 bit stuff needed by C. Because the bind was in 31 bit mode, it used the 31 bit libraries – which did not have the specified include file. When you compile in 64 bit mode you need to bind with the 64 bit libraries. The compile command sorts all this out depending on the options.
  • The libraries used when binding in 64 bit mode are syslib_x = cee.sceebnd2:cbc.sccnobj:sys1.csslib. See the xlc config file above.

IEW2470E 9511 ORDERED SECTION CEESTART NOT FOUND IN MODULE.
IEW2648E 5111 ENTRY CEESTART IS NOT A CSECT OR AN EXTERNAL NAME IN THE MODULE.

  • Compiling in 64 bit mode generates an entry point of CELQSTRT instead of CEESTART, so binder instructions for 31 bit programs specifying the entry point of CEESTART will fail.

Overall

Because IRRSEQ00 only comes in a 31 bit flavour, and not a 64 bit flavour, I could not call it directly from a 64 bit program, and I had to use a 31 bit compile and bind.

The Python interface to RACF is great.

The Python package pysear, for working with RACF, is great. The source is on GitHub, and the documentation starts here. It is well documented, and there are good examples.

I’ve managed to do a lot of processing with very little of my own code.

One project I’ve been meaning to do for some time is to extract the contents of a RACF database, compare them with a different database, and show the differences. IBM provides a batch program and a very large Rexx exec; this has some bugs and is not very nice to use. There is a Rexx interface, which worked, but I found I was writing a lot of code. Then I found the pysear code.

Background

The data returned for userids (and other types of data) have segments.
You can display the base segment for a user.

tso lu colin

To display the tso base segment

tso lu colin tso

Field names returned by pysear have the segment name as a prefix, for example base:max_incorrect_password_attempts.
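Because every field name carries its segment as a prefix, splitting them back apart is a one-liner; the trait names below are taken from the examples in this post.

```python
def split_trait(name):
    """Split a pysear trait name like 'base:max_incorrect_password_attempts'
    into (segment, field)."""
    segment, _, field = name.partition(":")
    return segment, field

print(split_trait("base:max_incorrect_password_attempts"))
print(split_trait("omvs:home_directory"))
```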

My first query

What are the active classes in RACF?

See the example.

from sear import sear
import json
import sys
result = sear(
    {
        "operation": "extract",
        "admin_type": "racf-options"
    },
)
json_data = json.dumps(result.result, indent=2)
print(json_data)

For error handling, see the Error handling section below.

This produces output like

{
  "profile": {
    "base": {
      "base:active_classes": [
        "DATASET",
        "USER",
        ...
      ],
      "base:add_creator_to_access_list": true,
      ...
      "base:max_incorrect_password_attempts": 3,
      ...
    }
  }
}

To process the active classes one at a time you need code like

for ac in result.result["profile"]["base"]["base:active_classes"]:
    print("Active class:",ac)

The returned attributes are called traits. See here for the traits for RACF options. The documentation shows

Trait                 base:max_incorrect_password_attempts
RACF Key              revoke
Data Types            String
Operators Allowed     "set", "delete"
Supported Operations  "alter", "extract"

Because this attribute is a single-valued object, you can set it or delete it.

You can use this attribute, for example:

result = sear(
    {
        "operation": "alter",
        "admin_type": "racf-options",
        "traits": {
            "base:max_incorrect_password_attempts": 5,
        },
    },
)

The trait "base:active_classes" is a list of classes ["DATASET", "USER", …]

The trait is

Trait                 base:active_classes
RACF Key              classact
Data Types            string
Operators Allowed     "add", "remove"
Supported Operations  "alter", "extract"

Because it is a list, you can add or remove an element; you do not use set or delete, which would replace the whole list.

Some traits, such as use counts, have Operators Allowed of N/A. You can only extract and display the information.

My second query

What are the userids in RACF?

The traits are listed here, and code examples are here.

I used

from sear import sear
import json

# get all userids beginning with ZWE
users = sear(
    {
        "operation": "search",
        "admin_type": "user",
        "userid_filter": "ZWE",
    },
)
profiles = users.result["profiles"]
# Now process each profile in turn.
# Because these are userid profiles we need admin_type=user and userid=...
for profile in profiles:
    print("===PROFILE===", profile)
    user = sear(
        {
            "operation": "extract",
            "admin_type": "user",
            "userid": profile,
        },
    )
    segments = user.result["profile"]
    for segment in segments:          # eg base or omvs
        for w1, v1 in segments[segment].items():
            json_data = json.dumps(v1, indent=2)
            print(w1, json_data)

This gave

===PROFILE=== ZWESIUSR
base:auditor false
base:automatic_dataset_protection false
base:create_date "05/06/20"
base:default_group "ZWEADMIN"
base:group_connections [
  {
    ...
    "base:group_connection_group": "IZUADMIN",
    ...
    "base:group_connection_owner": "IBMUSER",
    ...
},
{
    ...
    "base:group_connection_group": "IZUUSER",
   ...
}
...
omvs:default_shell "/bin/sh"
omvs:home_directory "/apps/zowe/v10/home/zwesiusr"
omvs:uid 990017
===PROFILE=== ZWESVUSR
...

Notes on using search and extract

If you use "operation": "search" you need a ..._filter. If you use extract, you use the data type directly, such as "userid": …
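A sketch of that rule in Python: build the request dictionary with a <key>_filter entry for search, and the plain key for extract. The helper name is my own; check the pysear documentation for the authoritative parameter names.

```python
def build_request(operation, admin_type, key, value):
    """Sketch: 'search' takes '<key>_filter', 'extract' takes '<key>' directly."""
    field = key + "_filter" if operation == "search" else key
    return {"operation": operation, "admin_type": admin_type, field: value}

print(build_request("search", "user", "userid", "ZWE"))
print(build_request("extract", "user", "userid", "ZWESIUSR"))
```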

Processing resources

You can process RACF resources. For example, the OPERCMDS class provides profiles for MVS.DISPLAY commands.

The sear command needs a "class": … value, for example

result = sear(
    {
        "operation": "search",
        "admin_type": "resource",
        "class": "OPERCMDS",
        "resource_filter": "MVS.**",
    },
)
result = sear(
    {
        "operation": "extract",
        "admin_type": "resource",
        "resource": "MVS.DISPLAY",
        "class": "Opercmds",
    },
)

The value of the class is converted to upper case.

Changing a profile

If you change a profile, for example to issue the PERMIT command

from sear import sear
import json

result = sear(
    {   "operation": "alter",
        "admin_type": "permission",
        "resource": "MVS.DISPLAY.*",
        "userid": "ADCDG",
        "traits": {
          "base:access": "CONTROL"
        },
        "class": "OPERCMDS"

    },
)
json_data = json.dumps(result.result, indent=2)
print(json_data)

The output was

{
  "commands": [
    {
      "command": "PERMIT MVS.DISPLAY.* CLASS(OPERCMDS)ACCESS (CONTROL) ID(ADCDG)",
      "messages": [
        "ICH06011I RACLISTED PROFILES FOR OPERCMDS WILL NOT REFLECT THE UPDATE(S) UNTIL A SETROPTS REFRESH IS ISSUED"
      ]
    }
  ],
  "return_codes": {
    "racf_reason_code": 0,
    "racf_return_code": 0,
    "saf_return_code": 0,
    "sear_return_code": 0
  }
}

Error handling

Return codes and error messages

There are two layers of error handling.

  • Invalid requests – problems detected by pysear
  • Non-zero return codes from the underlying RACF code.

If pysear detects a problem it returns it in

result.result.get("errors") 

For example, you have specified an invalid parameter such as "userzzz": "MINE".

If you do not have this field, then the request was passed to the RACF service. This returns multiple values. See IRRSMO00 return and reason codes. There will be values for

  • SAF return code
  • RACF return code
  • RACF reason code
  • sear return code.

If the RACF return code is zero then the request was successful.

To make error handling easier – and have one error handler for all requests – I used


try:
    result = try_sear(search)
except Exception as ex:
    print("Exception-Colin Line112:", ex)
    quit()

Where try_sear was

def try_sear(data):
    # execute the request
    result = sear(data)
    if result.result.get("errors") is not None:
        print("Request:", result.request)
        print("Error with request:", result.result["errors"])
        raise ValueError("errors")
    elif result.result["return_codes"]["racf_return_code"] != 0:
        rcs = result.result["return_codes"]
        print("SAF Return code", rcs["saf_return_code"],
              "RACF Return code", rcs["racf_return_code"],
              "RACF Reason code", rcs["racf_reason_code"],
              )
        raise ValueError("return codes")
    return result

Overall

This interface is very easy to use.
I use it to extract definitions from one RACF database and save them as JSON files, repeat with a different (historical) RACF database, then compare the two sets of JSON files to see the differences.

Note: The sear command only works with the active database, so I had to make the historical database active, run the commands, and switch back to the current database.
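The comparison step can be done with a few lines of Python. This is a sketch of the approach, assuming each extract has been flattened to a dictionary of trait names to values; the sample traits are taken from the output shown earlier, with made-up changed values.

```python
import json

def diff_profiles(old, new):
    """Return traits added, removed, or changed between two profile dicts."""
    added   = {k: new[k] for k in new.keys() - old.keys()}
    removed = {k: old[k] for k in old.keys() - new.keys()}
    changed = {k: (old[k], new[k])
               for k in old.keys() & new.keys() if old[k] != new[k]}
    return added, removed, changed

old = {"base:default_group": "ZWEADMIN", "omvs:uid": 990017}
new = {"base:default_group": "ZWEADMIN", "omvs:uid": 990099,
       "omvs:home_directory": "/u/zwesiusr"}
added, removed, changed = diff_profiles(old, new)
print(json.dumps({"added": added, "changed": changed}, indent=2))
```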

What RACF profile is used for a data set?

I was trying to find out why I had write access to a data set, when I was only expecting read access.

You can search for profiles

SEARCH CLASS(DATASET) MASK(COLIN)

gave me

COLIN.ENCR.* (G)
COLIN.ENCR.** (G)
COLIN.ENCRCLEA.* (G)
COLIN.ENCRDH.* (G)
COLIN.ENCR2.* (G)
COLIN.ENCR3.* (G)
COLIN.MQ944.** (G)

The command (CLASS(DATASET) is the default, so it can be omitted)

SEARCH MASK(COLIN,44)

gave me profiles starting with COLIN containing 44

COLIN.MQ944.** (G)
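The MASK(prefix,substring) behaviour can be mimicked in plain Python, which is handy when post-processing a list of profile names offline; this mirrors the SEARCH output above rather than calling RACF.

```python
def mask(profiles, prefix, substring=""):
    """Mimic SEARCH MASK(prefix,substring): keep names starting with
    prefix and containing substring somewhere after it."""
    return [p for p in profiles
            if p.startswith(prefix) and substring in p[len(prefix):]]

profiles = ["COLIN.ENCR.*", "COLIN.MQ944.**", "IBMUSER.A"]
print(mask(profiles, "COLIN", "44"))
```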

List a profile LISTDSD

tso listdsd dataset('COLIN.MQ944.**')

gave

INFORMATION FOR DATASET COLIN.MQ944.** (G)

LEVEL  OWNER     UNIVERSAL ACCESS  WARNING  ERASE
-----  --------  ----------------  -------  -----
 00    COLIN           NONE          NO      NO
...
YOUR ACCESS  CREATION GROUP  DATASET TYPE
-----------  --------------  ------------
   ALTER          SYS1         NON-VSAM

The command

tso listdsd dataset('COLIN.MQ944.SOURCE')

gave

ICH35003I NO RACF DESCRIPTION FOUND FOR COLIN.MQ944.SOURCE

You need the generic option

tso listdsd dataset('COLIN.MQ944.SOURCE') generic

gave

INFORMATION FOR DATASET COLIN.MQ944.** (G)

LEVEL  OWNER     UNIVERSAL ACCESS  WARNING  ERASE
-----  --------  ----------------  -------  -----
 00    COLIN           NONE          NO      NO
...
YOUR ACCESS  CREATION GROUP  DATASET TYPE
-----------  --------------  ------------
   ALTER          SYS1         NON-VSAM

This says

If I were to use data set 'COLIN.MQ944.SOURCE', RACF would check profile COLIN.MQ944.**, and I would have ALTER access to it.

What is RACF GLOBAL….

With RACF you can define a profile and give userids access to it. You can also define a global profile for heavily used data sets, so the profile is cached and no I/O is needed to the RACF database.

Define a normal profile

 ADDSD  'COLIN.Z31B.*' UACC(READ)                         
PERMIT 'COLIN.Z31B.*' ID(IBMUSER,COLIN) ACCESS(CONTROL)

You can list it

LISTDSD DATASET('COLIN.Z31B.*') ALL

and delete it

DELDSD DATASET('COLIN.Z31B.*') 

For some resources used very frequently, you can cache definitions in memory. These are called GLOBAL definitions. When a check is made for a userid to access a resource, if there is a matching global definition, there is no RACF database I/O, and the check should be fast.

Define a global resource

You need to set up the global resource before you can use it. See the IBM documentation.

Example 1 contains

SETROPTS GLOBAL(DATASET)
RDEFINE GLOBAL DATASET
SETROPTS GLOBAL(DATASET) REFRESH

and

RALTER   GLOBAL DATASET ADDMEM('SYS1.HELP'/READ)
ADDSD 'SYS1.HELP' UACC(READ)
SETROPTS GLOBAL(DATASET) REFRESH

to define a resource. It gives a default of read access to the data set SYS1.HELP.

You can display the contents of the global data set class

rlist global dataset

which gives

CLASS     NAME
-----     ----
GLOBAL    DATASET
...
RESOURCES IN GROUP
------------------
SYS1.HELP/READ
...

You can delete a global profile

RALTER   GLOBAL DATASET DELMEM('SYS1.HELP'/READ)
SETROPTS GLOBAL(DATASET)

You can remove the global dataset class if there are no elements in the class

RDELETE  GLOBAL DATASET
SETROPTS NOGLOBAL(DATASET)
SETROPTS GLOBAL(DATASET) REFRESH

If you now list the global profile

rlist global dataset

gives

 ICH13003I DATASET NOT FOUND

I’m guessing that if you want READ access to the SYS1.HELP data set, the entry in the GLOBAL DATASET will be found. If you want UPDATE access to the SYS1.HELP data set, because there is no entry in the GLOBAL DATASET covering it, checking will fall through to the normal profiles defined with ADDSD.
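That guess can be modelled as a lookup: if the requested access is covered by the GLOBAL entry, allow it with no database I/O; otherwise fall through to the normal profiles. This is a toy model of the logic described above, not RACF's actual checking algorithm.

```python
ACCESS_ORDER = ["NONE", "READ", "UPDATE", "CONTROL", "ALTER"]

def check(dataset, requested, global_table, normal_profiles):
    """Toy model: GLOBAL grants up to its listed level with no database I/O;
    anything higher falls through to the normal profile check."""
    granted = global_table.get(dataset)
    if granted and ACCESS_ORDER.index(requested) <= ACCESS_ORDER.index(granted):
        return "allowed by GLOBAL (no database I/O)"
    return "fall through to normal profiles: " + normal_profiles.get(dataset, "none")

global_table = {"SYS1.HELP": "READ"}          # from RALTER GLOBAL ... ADDMEM
normal = {"SYS1.HELP": "UACC(READ)"}          # from ADDSD
print(check("SYS1.HELP", "READ", global_table, normal))
print(check("SYS1.HELP", "UPDATE", global_table, normal))
```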

You do not need to configure the GLOBAL DATASET class, but it can give performance benefits if you are on a heavily used system. It is not enabled on my one-person zD&T system.

Beware

In the documentation it also defines a "normal" profile like ADDSD 'SYS1.HELP' UACC(READ). I'm guessing that this is a fallback if someone deactivates the global dataset profiles.

So you should read the documentation and follow its instructions.

How do I get my client talking to the server with a signed certificate

Signed certificates are very common, but I was asked how I connected my laptop to my server, in the scenario “one up” from a trivial example.

Basic concepts

  • A private/public key pair is generated on a machine. The private key stays on the machine (securely). The public key can be sent anywhere.
  • A certificate has (amongst other stuff)
    • Your name
    • Address
    • Public key
    • Validity dates

Getting a signed certificate

When you create a certificate, a checksum of the contents is computed, encrypted with your private key, and attached to the certificate.

Conceptually, you go to your favourite Certificate Authority (UKCA) building and they sign it:

  • They check your passport and gas bill with the details of your certificate.
  • They attach the UKCA public key to your certificate.
  • They do a checksum of the combined documents.
  • They encrypt the checksum with the UKCA private key, and stick this on the combined document.

You now have a signed certificate, which you can send to anyone who cares.

Using it

When I receive it and use it:

  • my system compares my copy of the UKCA public certificate with the one in your certificate – it matches!
  • Using (either) UKCA public certificate – decrypt the encrypted checksum
  • Do the same checksum calculation – and the two values should match.
  • If they match I know I can trust the information in the certificate.

This means the checking of the certificate requires the CA certificate that signed it.
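The checking steps above can be modelled with a toy example. Real certificate signatures use asymmetric keys (and a proper crypto library); here a shared secret stands in for the UKCA key pair, purely to show the checksum-compare idea.

```python
import hashlib
import hmac

def sign(document, key):
    """Toy 'signature': a keyed checksum attached to the document."""
    return hmac.new(key, document, hashlib.sha256).hexdigest()

def verify(document, signature, key):
    """Recompute the checksum and compare - the step the receiver performs."""
    return hmac.compare_digest(sign(document, key), signature)

ca_key = b"UKCA-private-key-stand-in"          # stand-in for the CA key pair
cert = b"CN=LINUXDOCCA256, public key, validity dates"
sig = sign(cert, ca_key)
print(verify(cert, sig, ca_key))               # checksums match: trusted
print(verify(cert + b"tampered", sig, ca_key)) # checksums differ: rejected
```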

To use a (Linux) certificate on z/OS you either need to

  • issue the RACF GENCERT command on the Linux .csr file, export it, then download it to Linux. The certificate will contain the z/OS CA’s certificate.
  • import the Linux CA certificate into RACF (This is the easy, do once solution.)

then

  • connect the CA certificate to your keyring, and usually restart your server.

Setting up my system

If the CA certificate is not on your system, you need to import it from a dataset.

You can use FTP, or use cut and paste to the dataset.

Once you have the CA certificate in your RACF database you can connect it to your keyring.

Create my Linux CA and copy it to z/OS

CA="docca256"
casubj=" -subj /C=GB/O=DOC/OU=CA/CN=LINUXDOCCA256"
days="-days 1095"
rm $CA.pem $CA.key.pem

openssl ecparam -name prime256v1 -genkey -noout -out $CA.key.pem

openssl req -x509 -sha384 -config caca.config -key $CA.key.pem -keyform pem -nodes $casubj -out $CA.pem -outform PEM $days

openssl x509 -in $CA.pem -text -noout|less

Where my caca.config has

####################################################################
[ req ]
distinguished_name = ca_distinguished_name
x509_extensions = ca_extensions
prompt = no

authorityKeyIdentifier = keyid:always,issuer:always

[ca_distinguished_name ]
[ ca_extensions ]

subjectKeyIdentifier = hash
authorityKeyIdentifier = keyid:always
basicConstraints = critical,CA:TRUE, pathlen:0
keyUsage = keyCertSign, digitalSignature,cRLSign

Running the command gave

Certificate:
Data:
...
Issuer: C = GB, O = DOC, OU = CA, CN = LINUXDOCCA256
...
Subject: C = GB, O = DOC, OU = CA, CN = LINUXDOCCA256
...
X509v3 extensions:
...
X509v3 Basic Constraints: critical
CA:TRUE, pathlen:0
X509v3 Key Usage:
Digital Signature, Certificate Sign, CRL Sign
...

Where it has CA:TRUE and X509v3 Key Usage: Certificate Sign, which allows this certificate to be used to sign other certificates.

Installing the CA certificate on z/OS

You need to copy the docca256.pem file from Linux to a z/OS dataset (fixed block, LRECL 80, BLKSIZE 80). You can use FTP or cut and paste. I used dataset COLIN.DOCCA256.PEM.

Import it into z/OS, and connect it to the START1.MYRING keyring as a CERTAUTH.

//COLRACF  JOB 1,MSGCLASS=H 
//S1 EXEC PGM=IKJEFT01,REGION=0M
//SYSPRINT DD SYSOUT=*
//SYSTSPRT DD SYSOUT=*
//SYSTSIN DD *
RACDCERT CHECKCERT('COLIN.DOCCA256.PEM')

*RACDCERT DELETE (LABEL('LINUXDOCA256')) CERTAUTH
RACDCERT ADD('COLIN.DOCCA256.PEM') -
CERTAUTH WITHLABEL('LINUXDOCA256') TRUST

RACDCERT CONNECT(CERTAUTH LABEL('LINUXDOCA256') -
RING(MYRING) USAGE(CERTAUTH)) ID(START1)

SETROPTS RACLIST(DIGTCERT,DIGTRING ) refresh
/*

Once you have connected the CA to the keyring, you need to get the server to reread the keyring, or restart the server.

Getting my Linux certificate signed by z/OS

This works, but is a bit tedious for a large number of certificates.

I created a certificate request file using

timeout="--connect-timeout 10"
enddate="-enddate 20290130164600Z"

ext="-extensions end_user"

name="docec384Pass2"
key="$name.key.pem"
cert="$name.pem"
p12="$name.p12"
subj="-subj /C=GB/O=Doc3/CN="$name
rm $name.key.pem
rm $name.csr
rm $name.pem
passin="-passin file:password.file"
passout="-passout file:password.file"
md="-md sha384"
policy="-policy signing_policy"
caconfig="-config ca2.config"
caextensions="-extensions clientServer"


openssl ecparam -name secp384r1 -genkey -noout -out $name.key.pem
openssl req -config openssl.config -new -key $key -out $name.csr -outform PEM $subj $passin $passout

The certificate request file docec384Pass2.csr looks like

-----BEGIN CERTIFICATE REQUEST----- 
MIIBpzCCAS0CAQAwNDELMAkGA1UEBhMCR0IxDTALBgNVBAoMBERvYzMxFjAUBgNV
...
Tmmvu/nqe0wTc/jJuC4c/QJt+BQ1SYMxz9LiYjBXZuOZkpDdUieZDbbEew==
-----END CERTIFICATE REQUEST-----

With the words CERTIFICATE REQUEST in the header and trailer records.

Create a dataset (COLIN.DOCLCERT.CSR) with the contents. It needs to be a sequential FB, LRECL 80 dataset.
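Before creating the dataset it is worth checking the .csr will survive the fixed-block, LRECL 80 layout: the header and trailer records must be present, and no line can be longer than 80 characters. A small pre-flight check (the truncated request is from the example above):

```python
def check_fb80_pem(text, kind="CERTIFICATE REQUEST"):
    """Check a PEM file will survive a copy into an FB LRECL=80 dataset."""
    lines = text.strip().splitlines()
    problems = []
    if lines[0] != f"-----BEGIN {kind}-----":
        problems.append("missing BEGIN header")
    if lines[-1] != f"-----END {kind}-----":
        problems.append("missing END trailer")
    problems += [f"line {i + 1} longer than 80"
                 for i, l in enumerate(lines) if len(l) > 80]
    return problems

csr = """-----BEGIN CERTIFICATE REQUEST-----
MIIBpzCCAS0CAQAwNDELMAkGA1UEBhMCR0IxDTALBgNVBAoMBERvYzMxFjAUBgNV
-----END CERTIFICATE REQUEST-----"""
print(check_fb80_pem(csr) or "ok")
```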

  • Delete the old one
  • Generate the certificate using the information in the .csr. Sign it with the z/OS CA certificate
  • Export it to a dataset.
//IBMRACF2 JOB 1,MSGCLASS=H 
//S1 EXEC PGM=IKJEFT01,REGION=0M
//SYSPRINT DD SYSOUT=*
//SYSTSPRT DD SYSOUT=*
//SYSTSIN DD *

RACDCERT ID(COLIN ) DELETE(LABEL('LINUXCERT'))

RACDCERT ID(COLIN) GENCERT('COLIN.DOCLCERT.CSR') -
SIGNWITH (CERTAUTH LABEL('DOCZOSCA')) -
WITHLABEL('LINUXCERT')

RACDCERT ID(COLIN) LIST(label('LINUXCERT'))
RACDCERT EXPORT(label('LINUXCERT')) -
id(COLIN) -
format(CERTB64 ) -
password('password') -
DSN('COLIN.DOCLCERT.PEM' )

SETROPTS RACLIST(DIGTCERT,DIGTRING ) refresh

Then you can download COLIN.DOCLCERT.PEM to a file on Linux and use it. I used cut and paste to create a file docec384Pass2.zpem

I used it like

set -x 
name="colinpaice"
name="colinpaice"
name="docec384Pass2"
insecure=" "
insecure="--insecure"
timeout="--connect-timeout 100"
url="https://10.1.1.2:10443"
trace="--trace curl.trace.txt"

cert="--cert ./$name.zpem:password"
key="--key $name.key.pem"

curl -v $cert $key $url --verbose $timeout $insecure --tlsv1.2 $trace

Using Wireshark I can see the CA certificates being sent from z/OS, and the docec384Pass2.zpem certificate being used, signed by a z/OS CA certificate.

Using the certificate in the Chrome browser

  • In Chrome settings, search for cert.
  • Click security
  • Scroll down to Manage certificates, and select it
  • Select customised
  • Select import, and then select the file.
    • When I generated the file with the Linux CA it had a file type of .pem
    • When I signed it on z/OS, then downloaded it with a type of .zpem, I had to select all files (because the defaults are *.pem, *.csr, *.der…)

When tracing a job it helps to trace the correct address space.

The title When tracing a job it helps to trace the correct address space is a clue – it looks obvious, but the problem was actually subtle.

The scenario

I was testing the new version of Zowe, and one of the components failed to start because it could not find a keyring. Other components could find it ok. I did a RACF trace and there were no records. The question is why were there no records?

The execution environment.

I start Zowe with S ZOWE33. This spawns some processes such as ZOWE335. This runs a Bash script which starts a Java program.

I start a GTF trace with

s gtf.gtf,m=gtfracf
#set trace(callable(type(41)),jobname(Zowe*))

Where callable type 41 is for r_datalib services to access a keyring.

No records were produced

What is the problem?
Take a few minutes to think about it.

Solution

After 3 days I stumbled on the solution – having noticed, but ignored, the evidence. I wondered if the Java code to process keyrings did not use the R_datalib API. I wondered if Java 21 uses a different jar file for processing keyrings – yes it does – but this didn't solve the problem.

The solution was I should have been tracing job ZWE33CS! Whoa – where did that come from?

The Java program was started with

_BPX_JOBNAME=ZWE33CS /usr/lpp/java/J21.0_64/bin/java

See here which says

When a new z/OS® UNIX process is started, it runs in a z/OS UNIX initiator (a BPXAS address space). By default, this address space has an assigned job name of userIDx, where userID is the user ID that started the process, and x is a decimal number. You can use the _BPX_JOBNAME environment variable to set the job name of the new process. Assigning a unique job name to each … process helps to identify the purpose of the process and makes it easier to group processes into a WLM service class.

If I use the command D A,L it lists all of the address spaces running on the system. I had seen the ZOWE33* address spaces, and also the ZWE* ones – but ignored the ZWE* ones. Once I knew the solution it was so obvious.

What RACF audit records are produced with pass tickets?

A pass ticket is a one time password for a userid, valid with the specified application. I’ve blogged Creating and using pass tickets on z/OS.

I’ve also blogged Of course – JCL subroutines is the answer about using ICETOOL to process RACF audit records in SMF.

Create a pass ticket

After I had created the pass ticket, I used the following JCL to format the RACF SMF PTCREATE record.

//IBMPTICK JOB 1,MSGCLASS=H,RESTART=PRINT
// JCLLIB ORDER=COLIN.RACF.ICETOOL
// INCLUDE MEMBER=RACFSMF
//* INCLUDE MEMBER=PRINT
// INCLUDE MEMBER=ICETOOL
//TOOLIN DD *
COPY FROM(IN) TO(TEMP) USING(TEMP)
DISPLAY FROM(TEMP) LIST(PRINT) -
BLANK -
ON(63,8,CH) HEADER('USER ID') -
ON(184,8,CH) HEADER('FROMJOB') -
ON(14,8,CH) HEADER('RESULT') -
ON(23,8,CH) HEADER('TIME') -
ON(286,8,CH) HEADER('APPL ') -
ON(295,8,CH) HEADER('FORUSER ')
//TEMPCNTL DD *
INCLUDE COND=(5,8,CH,EQ,C'PTCREATE')
OPTION VLSHRT
//

to produce

USER ID   FROMJOB   RESULT    TIME      APPL    FORUSER   USER NAME
--------  --------  --------  --------  ------  --------  -----------
ZWESVUSR  ZWE1AZ    SUCCESS   15:00:55  MQWEB   COLIN     ZOWE SERVER

This shows that job ZWE1AZ, running with userid ZWESVUSR, successfully created a pass ticket for userid COLIN with application MQWEB.

Show where the pass ticket is used

Once the pass ticket had been used, I used the following JCL to display the JOBINIT audit record.

//IBMJOBI  JOB 1,MSGCLASS=H,RESTART=PRINT
// JCLLIB ORDER=COLIN.RACF.ICETOOL
// INCLUDE MEMBER=RACFSMF
//* INCLUDE MEMBER=PRINT
// INCLUDE MEMBER=ICETOOL
//TOOLIN DD *
COPY FROM(IN) TO(TEMP) USING(TEMP)
DISPLAY FROM(TEMP) LIST(PRINT) -
BLANK -
ON(63,8,CH) HEADER('USER ID ') -
ON(14,8,CH) HEADER('RESULT ') -
ON(23,8,CH) HEADER('TIME ') -
ON(184,8,CH) HEADER('JOBNAME ') -
ON(286,8,CH) HEADER('APPL ') -
ON(631,8,CH) HEADER('SESSTYPE')-
ON(4604,4,CH) HEADER('PTOEVAL ') -
ON(4609,4,CH) HEADER('PSUCC ')
//TEMPCNTL DD *
INCLUDE COND=(5,8,CH,EQ,C'JOBINIT ')
OPTION VLSHRT
//

it produced the output

USER ID   RESULT    TIME      JOBNAME   APPL    SESSTYPE  PTOEVAL  PSUCC
--------  --------  --------  --------  ------  --------  -------  -----
COLIN     SUCCESSP  15:01:02  CSQ9WEB   MQWEB   OMVSSRV   YES      YES
COLIN     RACINITD  15:01:02  CSQ9WEB   MQWEB   OMVSSRV   NO       NO

The first record shows,

  • in job CSQ9WEB,
  • running with APPLication id of MQWEB,
  • Sesstype OMVSSRV, a z/OS UNIX server application. See RACROUTE TYPE=VERIFY under SESSION=type.
  • userid COLIN SUCCESSfully logged on with a PassTicket (SUCCESSP),
  • PTOEVAL – YES, the supplied password was evaluated as a PassTicket,
  • PSUCC – YES, the supplied password was evaluated successfully as a PassTicket.

The second record shows RACINITD (Successful RACINIT deletion) for the userid COLIN in the job CSQ9WEB, and the password was not used.

What’s the best way of connecting to an HTTPS server. Pass ticket or JWT?

This blog post was written as background to some blog posts on Zowe API-ML. It provides background knowledge for HTTPS servers running on z/OS, and I think it is useful on its own. I've written about an MQWEB server, because I have configured this on my system.

The problem

I want to manage my z/OS queue manager from my Linux machine. I have several ways of doing it.

Which architecture?

  • Use an MQ client. Establish a client connection to the CHINIT, and use MQPUT and MQGET to send and receive administration messages to the queue manager.
    • You can issue a command string, and get back a response string which you then have to parse
    • You can issue an MQINQ API request to programmatically query attributes, and get the values back in fields. No parsing, but you have to write a program to do the work.
  • Use the REST API. This is an HTTP request in a standard format into the MQWEB server.
    • You can issue a command string, and get back a response string which you then have to parse to extract the values.
    • You can issue a request where the query is encoded in the URL, and get the response back in JSON format. It is trivial to extract individual fields from the returned data.

Connecting to the MQWEB server

If you use REST (over HTTPS) there are several ways of doing this

  • You can connect using userid and password. It may be OK to enter your password when you are at the keyboard, but not if you are using scripts and you may be away from your keyboard. If hackers get hold of the password, they have weeks to use it, before the password expires. You want to give your password once per session, not for every request.
  • You can connect using certificates, without specifying userid and password.
    • It needs a bit of set up at the server to map your certificate to a userid.
    • It takes some work to set up how to revoke your access, if you leave the company, or the certificate is compromised.
    • Your private key could be copied and used by hackers. There is discussion about reducing the validity period from over a year to 47 days. For some people this is still too long! You can have your private certificate on a dongle which you have to present when connecting to a back end. This reduces the risk of hackers using your private key.
  • You can connect with a both certificate and userid and password. The certificate is used to establish the TLS session, and the userid and password are used to logon to the application.
  • You can use a pass ticket. You issue a z/OS service which, if authorised, generates a one time password valid for 10 minutes or less. If hackers get hold of the pass ticket, they do not have long to be able to exploit it. The application generating the pass ticket, does not need the password of the userid, because the application has been set up as trusted.
  • You can use a JSON Web Token (JWT). This has some similarities with certificates. In the payload are a userid value and an issuer value. I think of the issuer as the domain the JWT has come from – it could be TEST or a company name. From the issuer value, and the IP address range, you configure the server to specify a realm value. From the userid and realm you can map this to a userid on the server. A JWT can be valid from minutes to many hours (but should be under a day). The userid-and-realm mapping to a userid is different to the certificate mapping to a userid.

Setting up a pass ticket

The pass ticket is used within the sysplex; it cannot be used outside it. The pass ticket is a form of password, so it needs to be validated against the RACF database.

The application that generates the pass ticket must be authorised to a profile for the application. For example, for the application TSO on system S0W1, the profile is TSOS0W1:

 RDEFINE PTKTDATA TSOS0W1 

and a profile to allow a userid to create a pass ticket for the application

RDEFINE PTKTDATA   IRRPTAUTH.TSOS0W1.*  UACC(NONE) 

PERMIT IRRPTAUTH.TSOS0W1.* CLASS(PTKTDATA) ID(COLIN) ACCESS(UPDATE)
PERMIT IRRPTAUTH.TSOS0W1.* CLASS(PTKTDATA) ID(IBMUSER) ACCESS(UPDATE)

Userids COLIN and IBMUSER can issue the callable service IRRSPK00 to generate a pass ticket for a user for the application TSOS0W1.

The output is a one-use password which has a validity of up to 10 minutes.
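RACF does the real checking, but the one-use, time-limited behaviour can be illustrated with a small Python sketch. This is purely an illustrative model – the ticket store and checks below are not RACF’s actual algorithm, and all the names are made up:

```python
import time

# Illustrative model of pass ticket checking - NOT RACF's real algorithm.
# A ticket is accepted only once, only for the userid and application
# it was issued for, and only within 10 minutes of issue.
VALIDITY_SECONDS = 600

issued = {}   # ticket -> (userid, application, issue time)
used = set()  # tickets that have already been presented

def issue_ticket(ticket, userid, application):
    issued[ticket] = (userid, application, time.time())

def check_ticket(ticket, userid, application):
    if ticket in used:
        return False                      # one use only
    entry = issued.get(ticket)
    if entry is None:
        return False                      # never issued
    t_user, t_appl, t_time = entry
    if (t_user, t_appl) != (userid, application):
        return False                      # wrong userid or application
    if time.time() - t_time > VALIDITY_SECONDS:
        return False                      # expired
    used.add(ticket)
    return True

issue_ticket("ABC12345", "COLIN", "TSOS0W1")
print(check_ticket("ABC12345", "COLIN", "MQWEB"))    # False - wrong application
print(check_ticket("ABC12345", "COLIN", "TSOS0W1"))  # True  - first use
print(check_ticket("ABC12345", "COLIN", "TSOS0W1"))  # False - replay
```

The wrong-application check is why a pass ticket generated for TSOS0W1 cannot be replayed against MQWEB.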

As an example, you could configure your MQWEB server to use profile name MQWEB, or CSQ9WEB.

How is it used

A typical scenario is for an application running on a work station to issue a request to an “application” on z/OS, like z/OSMF, to generate a pass ticket for a userid and application name.

The client on the work station then issues a request to the back end server, with the userid and pass ticket. If the application name matches that of the back end server, the pass ticket will be accepted as a password. The logon will fail if a different application name is used, so a pass ticket for TSO cannot be used for MQWEB.
This is more secure than sending a userid and password up with every back end request, but there is additional work in creating the pass ticket, and two network flows.

This solution scales because very little work needs to be done on the work station, and the setup work to generate the pass tickets is done only once.

JSON Web Tokens

See What are JSON Web Tokens and how do they work?

The JWT sent from the client has an expiry time. This can be from seconds to hours. I think it should be less than a day – perhaps a couple of hours at most. If a hacker has a copy of the JWT, they can use it until it expires.

The back end server needs to validate the token. It could do this by having a copy of the signer’s public certificate in the server’s keyring, or by sending a request back to the originator to validate it.

If validation is being done with public certificates, then because the client’s private key is used to sign the JWT, the server needs a copy of the client’s public certificate in the server’s keyring. This can be hard to manage if there are many clients.

The Liberty web server has definitions like

<openidConnectClient id="RSCOOKIE"
    clientId="COLINCOO2"
    realmName="zOSMF"
    inboundPropagation="supported"
    issuerIdentifier="zOSMF"
    mapIdentityToRegistryUser="false"
    signatureAlgorithm="RS384"
    trustAliasName="CONN1.IZUDFLT"
    trustStoreRef="defaultKeyStore"
    userIdentifier="sub">
  <authFilter id="afint">
    <remoteAddress id="myAddress" ip="10.1.0.2" matchType="equals" />
  </authFilter>
</openidConnectClient>

For this entry to be used, various parameters need to match:

  • The issuerIdentifier. This string identifies the issuer of the JWT. It could be MEGABANK, TEST, or another string of your choice. It has to match what is in the JWT.
  • signatureAlgorithm. This must match the incoming JWT.
  • trustAliasName and trustStoreRef. These identify the certificate used to validate the JWT’s signature.
  • remoteAddress. This is the address, or address range, of the client’s IP addresses.

If you have 1000 client machines, you may need 1000 <openidConnectClient…/> definitions, because of the different certificates and IP addresses.

You may need 1000 entries in the RACMAP mapping of userid + realm to userid to be used on the server.
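The mapping itself is done with RACMAP. A hedged sketch, assuming the JWT’s userid value is COLINCOO2 and the realm is zOSMF (your filter and registry names will differ):

```
RACMAP ID(COLIN) MAP USERDIDFILTER(NAME('COLINCOO2')) +
  REGISTRY(NAME('zOSMF')) WITHLABEL('JWT COLINCOO2 to COLIN')
SETROPTS RACLIST(IDIDMAP) REFRESH
```

Each distinct userid-and-realm combination needs its own mapping entry, which is where the 1000 entries come from.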

How is it used

You generate the JWT. There are different ways of doing this.

  • Use a service like z/OSMF
  • Use a service on your work station. I have used Python to do this. The program is 30 lines long and uses the Python jwt package

You get back a long string. You can see what is in the string by pasting the JWT into jwt.io.
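My actual program uses the Python jwt package; the sketch below uses only the standard library, to show what is inside the string – a base64url-encoded header, payload, and signature. It uses HS256 for simplicity (the Liberty example above expects RS384), and all the values are made up:

```python
import base64
import hashlib
import hmac
import json
import time

def b64url(data: bytes) -> str:
    # JWTs use base64url encoding with the trailing padding stripped
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def make_jwt(userid, issuer, secret, lifetime_seconds=3600):
    header = {"alg": "HS256", "typ": "JWT"}
    now = int(time.time())
    payload = {"sub": userid, "iss": issuer,
               "iat": now, "exp": now + lifetime_seconds}
    signing_input = b64url(json.dumps(header).encode()) + "." + \
                    b64url(json.dumps(payload).encode())
    sig = hmac.new(secret, signing_input.encode(), hashlib.sha256).digest()
    return signing_input + "." + b64url(sig)

token = make_jwt("COLINCOO2", "zOSMF", b"not-a-real-secret")

# Decode the payload the same way jwt.io does - no secret is needed
# just to *read* the claims, only to verify the signature
payload_b64 = token.split(".")[1]
payload_b64 += "=" * (-len(payload_b64) % 4)   # restore the stripped padding
print(json.loads(base64.urlsafe_b64decode(payload_b64)))
```

The exp claim is what gives the JWT its limited lifetime, and the sub and iss claims carry the userid and issuer values discussed above.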
You pass this to the backend as a cookie or an HTTP header – the name depends on what the server is expecting. For example, as an Authorization header

'Authorization': "Bearer " + token

The JWT has limited access

For the server to use the JWT, it needs definitions to recognise it. If you have two back end servers:

  • Both servers could be configured to accept the JWT
    • If the servers specify different realms, then the mapped userid from the JWT could be different for each server, because the userid/realm-to-userid mapping can be different.
  • One server is configured to accept the JWT
    • If only one server has the definitions for the JWT, then trying to use the JWT to logon to another server will fail.

RACF: Processing audit records

RACF can write information to SMF about which userid logged on, what resources it accessed, etc. This can be used to check that there are no unexpected accesses, and that any violations are actioned.

The data tends to be “this userid had access to that resource”. It does not contain numeric values, such as response time.

Overview of SMF data

SMF data is a standard across z/OS. Each product has an SMF record type, and record subtypes are used to provide granularity within a product’s records. It is common for an SMF record to have sections within it. There may be zero or more sections, and the sections can be of varying length. An SMF formatting program needs to build useful reports from these sections.

There are many tools or products to process SMF records. Individual products may provide tools for formatting their own records, and there are external tools available to process the records.

Layout of RACF SMF records

The layout of the RACF SMF records is described in the publications: Record type 80: RACF processing record. It describes the field names, their offsets, and how to interpret the data (what each bit means); this information is sufficient to write a formatting program.


RACF also provides a formatter. The formatter runs as a SORT exit and expands the data. For example, the SMF data has a bit saying a userid has the SPECIAL attribute. The formatter expands this and creates a column “SPECIAL” with the value YES or NO. This makes it easy to filter and display records, because you do not need to map bits to their meanings – the exit has done it for you. The layout of the expanded records is described here.

What tools format the records?

A common (free) tool for processing the records that RACF produces is an extension to DFSORT called ICETOOL. (The IBM sort modules all begin with ICE… so calling it ICETOOL was natural.)

With ICETOOL you can say: include rows where…; display and format these fields; count the occurrences of this field; and add page titles. You can quickly generate tabular reports.

The output file of the RACF exit contains records of different formats mixed together. You need to filter by record type and display the subset of records you need.

JCL to extract the RACF SMF record and convert to the expanded format

//* DUMP THE SMF DATASETS 
// SET SMFPDS=SYS1.S0W1.MAN1
// SET SMFSDS=SYS1.S0W1.MAN2
//*
//SMFDUMP EXEC PGM=IFASMFDP,REGION=0M
//DUMPINA DD DSN=&SMFPDS,DISP=SHR,AMP=('BUFSP=65536')
//DUMPINB DD DSN=&SMFSDS,DISP=SHR,AMP=('BUFSP=65536')
//DUMPOUT DD DISP=(NEW,PASS),DSN=&RMF,SPACE=(CYL,(1,1))
//OUTDD DD DISP=(NEW,PASS),DSN=&OUTDD,
// SPACE=(CYL,(1,1)),DCB=(RECFM=VB,LRECL=12288)
//ADUPRINT DD SYSOUT=*
//*XMLFORM DD DSN=COLIN.XMLFORM,DISP=(MOD,CATLG),
//* SPACE=(CYL,(1,1)),DCB=(RECFM=VB,LRECL=12288)
//SYSPRINT DD SYSOUT=*

//SYSIN DD *
INDD(DUMPINA,OPTIONS(DUMP))
INDD(DUMPINB,OPTIONS(DUMP))
OUTDD(DUMPOUT, TYPE(30,80,81,83))
START(0000)
END(2359)
DATE(2025230,2025360)
ABEND(NORETRY)
USER2(IRRADU00)
USER3(IRRADU86)
/*

The RACF exits produce several files.

  • //OUTDD the expanded records are written to this dataset
  • //ADUPRINT contains information on how many of each record type the exit processed
  • //XMLFORM you can have it write data in XML format – for post processing

JCL to process the expanded records

The JCL below invokes the ICETOOL processing.

//S1      EXEC  PGM=ICETOOL,REGION=0M 
//DFSMSG DD SYSOUT=*
//TOOLMSG DD SYSOUT=*
//IN DD DISP=(SHR,PASS,DELETE),DSN=*.SMFDUMP.OUTDD
//TEMP DD DSN=&&TEMP3,DISP=(NEW,PASS),SPACE=(CYL,(1,1))
//PRINT DD SYSOUT=*

Where

  • //IN refers to the //OUTDD statement in the earlier step
  • //TEMP is an intermediate dataset. The sort program writes filtered records to this data set.
  • //PRINT is where the formatted output goes

ICETOOL Processing

The whole job is

//IBMJOBI  JOB 1,MSGCLASS=H,RESTART=PRINT 
// JCLLIB ORDER=COLIN.RACF.ICETOOL
// INCLUDE MEMBER=RACFSMF
// INCLUDE MEMBER=PRINT
// INCLUDE MEMBER=ICETOOL
//TOOLIN DD *
COPY FROM(IN) TO(TEMP) USING(TEMP)
DISPLAY FROM(TEMP) LIST(PRINT) -
BLANK -
ON(5,8,CH) HEADER('EVENT') -
ON(63,8,CH) HEADER('USER ID') -
ON(14,8,CH) HEADER('RESULT') -
ON(23,8,CH) HEADER('TIME') -
ON(175,8,CH) HEADER('TERMINAL') -
ON(184,8,CH) HEADER('JOBNAME') -
ON(286,8,CH) HEADER('APPL ')
//TEMPCNTL DD *
INCLUDE COND=(5,8,CH,EQ,C'JOBINIT ')
OPTION VLSHRT
//

You have to be careful about the offsets. Each record has a 4 byte length field on the front. So a field which the layout of the expanded records (described here) gives as column 1 for length 8 is specified in the JCL as column 5 for length 8. In the documentation the userid is column 59 for length 8; in the JCL it is ON(63,8,CH).

The processing is:

  • Copy the data from the dataset in //IN to the dataset in //TEMP, using the sort instructions in //TEMPCNTL. You take the name in USING(TEMP) and put CNTL on the end to locate the DDname.
  • The sort instructions say include only those records where columns 5 for length 8 contain the string ‘JOBINIT ‘ (so column 1 for length 8 in the mapping description).
  • The DISPLAY step copies records from the //TEMP dataset to the //PRINT DDname.
  • The ON() selects the data from the record, giving start column, length and format. For each field, it uses the specified column heading.
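The include-and-display logic above can be sketched in Python. This is illustrative only – the real records come from //OUTDD; the sample lines and the field helper here are made up:

```python
# Illustrative re-creation of the ICETOOL COPY/DISPLAY logic.
# ICETOOL columns are 1-based and include the 4-byte length field,
# so ON(5,8,CH) is Python slice [4:12], ON(63,8,CH) is [62:70], etc.
def field(record, on_col, length):
    return record[on_col - 1:on_col - 1 + length].strip()

sample_records = [   # made-up expanded records: cols 5-12 event, 14-21 result
    "    JOBINIT  SUCCESS ",
    "    RACINIT  SUCCESS ",
]

# INCLUDE COND=(5,8,CH,EQ,C'JOBINIT ')
included = [r for r in sample_records if r[4:12] == "JOBINIT "]

# DISPLAY ... ON(5,8,CH) ON(14,8,CH)
for r in included:
    print(field(r, 5, 8), field(r, 14, 8))   # prints: JOBINIT SUCCESS
```

The +4 adjustment is exactly the offset correction described above: documentation column 1 becomes ICETOOL column 5.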

The output

In the //PRINT is

EVENT    USER ID  RESULT   TIME     TERMINAL JOBNAME  APPL
-------- -------- -------- -------- -------- -------- --------
JOBINIT  START1   SUCCESS  09:54:11          SMFCLEAR
JOBINIT  START1   TERM     09:54:20          SMFCLEAR
JOBINIT  START1   TERM     09:55:13          CSQ9WEB
JOBINIT  IBMUSER  SUCCESS  10:12:14 LCL702   IBMUSER
JOBINIT  IBMUSER  SUCCESS  10:14:18          IBMJOBI
JOBINIT  IBMUSER  TERM     10:14:18          IBMJOBI
JOBINIT  IBMUSER  SUCCESS  10:21:39          IBMACCES
JOBINIT  IBMUSER  TERM     10:21:40          IBMACCES
JOBINIT  IBMUSER  SUCCESS  10:22:10          IBMACCES
JOBINIT  IBMUSER  TERM     10:22:11          IBMACCES
JOBINIT  IBMUSER  SUCCESS  10:23:01          IBMPASST
JOBINIT  IBMUSER  TERM     10:23:05          IBMPASST

Extending this

Knowing the format of the RACF expanded record, you can add more fields to the reports.

You can filter which records you want – for example, all records for userid START1. You can link filters with AND and OR statements.
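For example, to select only JOBINIT records for userid START1, the two conditions in //TEMPCNTL can be linked with AND (column 63 is the userid, as noted above; pad the literals to the field length):

```
INCLUDE COND=(5,8,CH,EQ,C'JOBINIT ',AND,63,8,CH,EQ,C'START1  ')
```

Replacing AND with OR would instead select records matching either condition.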

Of course – JCL subroutines is the answer

I was processing RACF SMF records to report how clients were logging on to MQ. This was a multi-step job, and with each report I added, the JCL got more and more messy.
The requirements were simple:

  • JCL to dump the SMF data sets to a temporary data set
  • Run a tool against this data set to produce the reports
    • There were reports for logon and logoff, and pass tickets, and access to profiles and…
  • I wanted it to be easy to use – and the JCL to fit on one screen!

All of this was easy, except my JCL file got bigger with every report I wanted, and I spent a lot of time scrolling up and down and changing the wrong file!

The solution was to use JCL subroutines – or INCLUDE JCL.

Examples

JCL to process the SMF data sets

You do not need to know what the JCL does – but you need to know it is in COLIN.JCL(RACFSMF).

//* DUMP THE SMF DATASETS 
// SET SMFPDS=SYS1.S0W1.MAN1
// SET SMFSDS=SYS1.S0W1.MAN3
//*
//SMFDUMP EXEC PGM=IFASMFDP,REGION=0M
//DUMPINA DD DSN=&SMFPDS,DISP=SHR,AMP=('BUFSP=65536')
//DUMPINB DD DSN=&SMFSDS,DISP=SHR,AMP=('BUFSP=65536')
//DUMPOUT DD DISP=(NEW,PASS),DSN=&RMF,SPACE=(CYL,(1,1))
//OUTDD DD DISP=(NEW,PASS),DSN=&OUTDD,
// SPACE=(CYL,(1,1)),DCB=(RECFM=VB,LRECL=12288)
//ADUPRINT DD SYSOUT=*
//SYSPRINT DD SYSOUT=*
//SYSIN DD *
INDD(DUMPINA,OPTIONS(DUMP))
INDD(DUMPINB,OPTIONS(DUMP))
OUTDD(DUMPOUT, TYPE(30,80,81,83))
START(1040)
END(2359)
DATE(2025229,2025360)
ABEND(NORETRY)
USER2(IRRADU00)
USER3(IRRADU86)

/*

Use it

//IBMJOBI  JOB 1,MSGCLASS=H,RESTART=PRINT 
// JCLLIB ORDER=COLIN.JCL
// INCLUDE MEMBER=RACFSMF

//S1 EXEC PGM=ICETOOL,REGION=1024K
//DFSMSG DD SYSOUT=*
//TOOLMSG DD SYSOUT=*
//IN DD DISP=(SHR,PASS,DELETE),DSN=*.SMFDUMP.OUTDD
//JOBI DD DSN=&&TEMPJOBI,DISP=(NEW,PASS),SPACE=(CYL,(1,1))
//PJOBI DD SYSOUT=*
//TOOLIN DD *
COPY FROM(IN) TO(JOBI) USING(JOBI)
DISPLAY FROM(JOBI) LIST(PJOBI) -
...
//JOBICNTL DD *
INCLUDE COND=(5,8,CH,EQ,C'JOBINIT ')
//

The clever bits are the JCLLIB statement, which gives the JCL library, and the INCLUDE MEMBER=RACFSMF, which copies in the JCL.

To use the JOBI content, I needed to specify JOBI, PJOBI and JOBICNTL, and similarly for each data component. 10 components meant 30 data sets – all with similar content and names. This led to a mess of JCL.

Going further, I could use a template with the same data set names (TEMP, PRINT etc.) and just change the content.

I converted the above JCL to create a member ICETOOL:

//S1      EXEC  PGM=ICETOOL,REGION=1024K 
//DFSMSG DD SYSOUT=*
//TOOLMSG DD SYSOUT=*
//IN DD DISP=(SHR,PASS,DELETE),DSN=*.SMFDUMP.OUTDD
//TEMP DD DSN=&&TEMP3,DISP=(NEW,PASS),SPACE=(CYL,(1,1))
//PRINT DD SYSOUT=*

and use it with

//IBMJOBI  JOB 1,MSGCLASS=H,RESTART=PRINT 
// JCLLIB ORDER=COLIN.JCL
// INCLUDE MEMBER=RACFSMF
//* INCLUDE MEMBER=PRINT
// INCLUDE MEMBER=ICETOOL

//TOOLIN DD *
COPY FROM(IN) TO(TEMP) USING(TEMP)
DISPLAY FROM(TEMP) LIST(PRINT) -

...
//TEMPCNTL DD *
INCLUDE COND=(5,8,CH,EQ,C'JOBINIT ')
//

Where I just had to change the report-specific data – and not the boilerplate.

For each RACF record type, I had a different JCL member, based on the above file.
To select SMF records within a date and time range, I just edited member RACFSMF and submitted the jobs – they all used it.

This was easy to do and it let me focus on the problem – rather than on the JCL.