A short C quiz, and some gotchas

I’ve been looking at porting pymqi, the Python MQ interface, to z/OS.

The biggest challenges had nothing to do with pymqi.

So if you are bored after Christmas and want something stimulating… here are a few questions for you. The answers are below. I tried getting them displayed upside down, like all quality magazines, but that was too difficult.

Question 1. C question

I’ve reduced the problem I experienced down to

int main() 
{ 
if ( 1==0 ) return 8; 
int rc; 
*=ERROR===========> CCN3275 Unexpected text 'int' encountered.
}                                                          

Hint: it works in a batch compile, using EDCCB

Question 2 binding in Unix Services

/bin/xlc a.o -L. -o b.so -Wl,INFO //'COLIN.MQ924.SCSQDEFS(CSQBRR2X)' -Wl,dll c.x

Gave

FSUM3218 xlc: File //'COLIN.MQ924.SCSQDEFS(CSQBRR2X)' contains an incorrect file suffix.

What do I need to do to fix it?

Question 3. Strange bind messages

Before I found the solution to problem number 2, I put the bind statements into a Unix Services file.

Using this gave me

IEW2326E 1221 THE FOLLOWING INVALID RECORD HAS BEEN SEEN:
=”lm-source” *

Copyright
IEW2326E 1221 THE FOLLOWING INVALID RECORD HAS BEEN SEEN:
IBM Corp. 2009, 2016 All Rights Reserved.

This bind statement was

cc -o mqsamp -W l,DYNAM=DLL,LP64 c.o mq.o

It worked without the mq.o.

The mq.o file had

* <copyright                                                          * 
* notice="lm-source"                                                  * 
* (C) Copyright IBM Corp. 2009, 2016 All Rights Reserved.             * 
* </copyright>                                                        * 

Answer

  1. Using the cc compiler, it defaults to #pragma langlvl(stdc89), which supports the C89 level of C. This requires all variable declarations to come before any logic. The restriction is relaxed at the C99 level, so specifying #pragma langlvl(stdc99) cures it. You can also specify LANGLVL(EXTENDED) on the cc statement. (See the sketch after this list.)
  2. To include data sets in some of the binder options, the host file needs a .OBJ suffix (an object host file for the binder/IPA Link). When I used /bin/xlc a.o … -Wl,INFO //'COLIN.MQ924.SCSQDEFS.OBJ(CSQBRR2X)' … it worked.
  3. The binder is not good with files in Unix Services; it likes records which are fixed block 80. The mq.o file had had its trailing blanks removed, and this confused it. I had to use a PDSE to get it to work.
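
For question 1, here is a minimal sketch of the fix, assuming the z/OS XL C compiler:

#pragma langlvl(stdc99)  /* relax the C89 "declarations first" rule */
int main()
{
  if ( 1==0 ) return 8;
  int rc = 0;            /* a declaration after a statement is valid in C99 */
  return rc;
}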

Which came first, the chicken or the checksum?

The ability to sign Java jar files, or z/OS modules, has been around for many years. With this, the loader checks the digital signature in the object. The digital signature is a checksum of the object, encrypted and stored with the object. At load time, the loader recalculates the checksum, decrypts the checksum stored in the object, and checks that they match.
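
In outline, the loader’s check looks like the sketch below. This is conceptual only: checksum() and decryptWithPublicKey() are hypothetical helpers I made up to show the flow, not a real API.

#include <string.h>

/* hypothetical helpers - not a real API */
extern void checksum(const void *data, long len, unsigned char *out);
extern void decryptWithPublicKey(const void *key, const unsigned char *sig,
                                 long sigLen, unsigned char *out);

/* Conceptual sketch of signature checking at load time */
int verifySignature(const void *object, long objectLen,
                    const unsigned char *storedSig, long sigLen,
                    const void *publicKey)
{
    unsigned char calculated[32];
    unsigned char original[32];
    checksum(object, objectLen, calculated);             /* recompute the checksum */
    decryptWithPublicKey(publicKey, storedSig, sigLen,
                         original);                      /* recover the stored one */
    return memcmp(calculated, original, 32) == 0;        /* match => not tampered  */
}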

MQ now supports this for some of its objects: downloadable .zip, .tar and .gz files.

For some of these you need to download the public key to use. This raises the problem that an evil person may have taken the object, removed the official signing information, and added their own stuff. You then download their public certificate – see it works, it must be official.

To prevent this you can take the checksum of the public certificate, and make that available along with the official public key. (This is the chicken and egg problem: you need the certificate to be able to check the main code, but how do you check the certificate without a certificate to check it against?)

On Linux you can calculate the checksum of a file using

sha256sum 9.2.4.0-IBM-MQ-Sigs-Certs.tar.gz

This gives 53c34cd374d7b08522423533ef2019b4aa0109a595fbaeab8ee6f927cb6c93ad, which is the same as the value on the IBM site, so it matches.
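
For interest, sha256sum is doing the equivalent of the following minimal C sketch. This assumes you have OpenSSL 1.1 or later available; compile with -lcrypto.

#include <stdio.h>
#include <openssl/evp.h>

int main(int argc, char **argv)
{
    unsigned char md[EVP_MAX_MD_SIZE];  /* the calculated digest */
    unsigned int  mdLen = 0;
    unsigned char buf[4096];
    size_t n;
    FILE *f;
    if (argc < 2) return 1;
    if ((f = fopen(argv[1], "rb")) == NULL) { perror("fopen"); return 1; }
    EVP_MD_CTX *ctx = EVP_MD_CTX_new();
    EVP_DigestInit_ex(ctx, EVP_sha256(), NULL);
    while ((n = fread(buf, 1, sizeof(buf), f)) > 0)
        EVP_DigestUpdate(ctx, buf, n);  /* hash the file a block at a time */
    EVP_DigestFinal_ex(ctx, md, &mdLen);
    EVP_MD_CTX_free(ctx);
    fclose(f);
    for (unsigned int i = 0; i < mdLen; i++)
        printf("%02x", md[i]);          /* print the digest in hex */
    printf("\n");
    return 0;
}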

The IBM MQ code signatures page says IBM MQ public certificates, checksums, and .sig files are available from https://ibm.biz/mq92signatures. On this signatures page it says

release level: 9.2.4.0-IBM-MQ-Sigs-Certs 
Continuous Delivery: 9.2.4 IBM MQ file signatures, checksums and certificates

Platforms:  AIX 64-bit, pSeries, Linux 64-bit,x86_64, Linux 64-bit,zSeries, Linux PPC64LE, Windows 64-bit, x86, z/OS

This page is an HTTPS page, with the certificate issued by a proper Certificate Authority, a trusted third party. If you trust this CA, you can trust the IBM page.

When you click download, it downloads

  • 9.2.4.0-IBM-MQ-Sigs-Certs.tar.gz.sha256sum – this file content has 53c34cd374d7b08522423533ef2019b4aa0109a595fbaeab8ee6f927cb6c93ad
  • 9.2.4.0-IBM-MQ-Sigs-Certs.tar.gz

The value in the sha256sum file matches the value of the sha256sum 9.2.4.0-IBM-MQ-Sigs-Certs.tar.gz command.

As you can trust the security chain from the web page, through to the downloads, you can trust the .gz file.

Jar signing

Java has had the capability to sign a jar for at least 10 years.

The jarsigner command takes a jar file and a keystore with a private key, and calculates the checksum. It then encrypts it, and creates some files in the jar. For example

jarsigner -keystore trust.jks -storepass zpassword checkTLS.jar signer

This uses

  • the keystore called trust.jks,
  • with password zpassword
  • the checkTLS.jar file
  • and uses the certificate with alias name signer. This certificate must have extendedKeyUsage with codeSigning.

The jar file now has some additional files, which can be seen using the jar -tvf checkTLS.jar command.

  • META-INF/SIGNER.SF. This is the Signature File.
  • META-INF/SIGNER.EC. This is the public key to be used.

Where SIGNER is the name of the alias of the private key in the keystore, used to sign the jar file. The jar file can be signed many times by different private keys.

To verify the signature you can use

  • jarsigner -verify checkTLS.jar
  • jarsigner -verbose -certs -verify checkTLS.jar

The jarsigner -verbose -certs -verify checkTLS.jar gave me

- Signed by "CN=signer, O=cpwebuser, C=GB"
    Digest algorithm: SHA-256
    Signature algorithm: SHA256withECDSA, 256-bit key

jar verified.

Warning: 
This jar contains entries whose certificate chain is invalid. Reason: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
This jar contains signatures that do not include a timestamp. Without a timestamp, users may not be able to validate this jar after any of the signer certificates expire (as early as 2024-11-03).

The signer certificate will expire on 2024-11-03.

This shows that the jar file is consistent with the checksumming, but the certificate cannot be validated.

I can tell it which keystore to use to validate the certificate, using

jarsigner -keystore trust.jks -certs -verify checkTLS.jar

With the -verbose option you also get the following (with some of the output rearranged for clarity). The “s” or “sm” at the front of an entry means s=signature verified, and m=entry listed in the manifest.

s = signature was verified 
m = entry is listed in manifest
k = at least one certificate was found in keystore
i = at least one certificate was found in identity scope


s 1402 Wed Dec 22 14:27:52 GMT 2021 META-INF/MANIFEST.MF

  >>> Signer
  X.509, CN=signer, O=cpwebuser, C=GB
  [certificate is valid from 22/12/21 14:51 to 30/01/25 16:46]
  X.509, CN=SSCA256, OU=CA, O=SSS, C=GB
  [certificate is valid from 04/11/21 15:48 to 03/11/24 15:48]

  [Invalid certificate chain: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target]

        1579 Wed Dec 22 16:12:04 GMT 2021 META-INF/SIGNER.SF
        1373 Wed Dec 22 16:12:04 GMT 2021 META-INF/SIGNER.EC

sm  54 Sat Jan 30 14:48:52 GMT 2021 checkTLS/checkTLS.manifest

  >>> Signer
  X.509, CN=signer, O=cpwebuser, C=GB
  [certificate is valid from 22/12/21 14:51 to 30/01/25 16:46]
  X.509, CN=SSCA256, OU=CA, O=SSS, C=GB
  [certificate is valid from 04/11/21 15:48 to 03/11/24 15:48]

  [Invalid certificate chain: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target]

When I downloaded the MQ 9.2.4 client and ran the jarsigner … -verbose command, the output included

sm 642 Thu Nov 04 16:01:46 GMT 2021 wlp/lib/extract/IFixUtils$ParsedIFix.class

[entry was signed on 04/11/21 17:58]
>>> Signer
X.509, CN=International Business Machines Corporation, OU=IBM CCSS, O=International Business Machines Corporation, L=Armonk, ST=New York, C=US
[certificate is valid from 25/08/21 01:00 to 26/08/23 00:59]
X.509, CN=DigiCert Trusted G4 Code Signing RSA4096 SHA384 2021 CA1, O="DigiCert, Inc.", C=US
[certificate is valid from 29/04/21 01:00 to 29/04/36 00:59]
      X.509, CN=DigiCert Trusted Root G4, OU=www.digicert.com, O=DigiCert Inc, C=US
[trusted certificate]
... 

This shows that the certificate used to sign the component of the jar file was signed by CN=International Business Machines Corporation, which was in turn signed by CN=DigiCert Trusted G4 Code Signing RSA4096 SHA384 2021 CA1. The jarsigner program was able to use its public certificate to validate the CA, and so validate the IBM certificate, and so validate the checksum.

Rexx to C to Rexx sample code

I’ve put up on github some sample code to demonstrate how you can write a function in C, and invoke it from Rexx. I’ve provided some glue code as Rexx uses R0 and R1 to pass parameters, and C programs only use R1.

I’ve created some small functions to use in your C program which hide the Rexx logic. For example

rc = CRexxDrop(pEnv,"ZDROP");
rc = CRexxGet(pEnv,"InSymbol",&buffer[0],&lBuffer);
rc = CRexxPut(pEnv,"CPPUTVar","Colinsv",0);

There is also code to iterate through all symbols.
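
Below is a minimal sketch of how these helpers might be used together. The prototypes are my assumptions based on the calls above (the real ones come from the header supplied with the sample); myFunc and the variable names are mine.

/* assumed prototypes - see the package's header for the real ones */
extern int CRexxGet (void *pEnv, char *name, char *value, long *lValue);
extern int CRexxPut (void *pEnv, char *name, char *value, long lValue);
extern int CRexxDrop(void *pEnv, char *name);

int myFunc(void *pEnv)
{
    char buffer[256];
    long lBuffer = sizeof(buffer);
    int  rc;
    rc = CRexxGet(pEnv, "InSymbol", &buffer[0], &lBuffer); /* read a Rexx variable */
    if (rc == 0)
        rc = CRexxPut(pEnv, "OutSymbol", buffer, lBuffer); /* set another variable */
    if (rc == 0)
        rc = CRexxDrop(pEnv, "ZDROP");                     /* drop a variable      */
    return rc;
}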

If you have any comments or suggestions, please let me know.

Where’s my invisible code?

In trying to get system exits written in C to work, I found my code was not being executed, even when the first instructions were a deliberate program check. I tried using the tried and trusted program AMASPZAP (known as Super Zap) for displaying the internals of a load module and zapping it – but my code was not there! Where was it hiding? When I took a dump of the address space my code was in the dump. Why was it invisible, and not being executed?

HSM archives on tape

Like recalling a long-unused dataset from HSM (which can take 20 minutes, as a physical tape has to be mounted to retrieve the data set), I had this vague memory of doing a presentation on the binder and the structure of load modules. After a cup of tea and a chocolate biscuit to help the recall, I remembered about classes etc. in a load module.

Classes etc

When I joined IBM over 40 years ago you wrote your program, and used the link editor to create the load module, a single blob of instructions and data.

Things have moved on. Think about a C program in read-only memory. When you issue a load to use it, you get a pointer to the read-only part (the re-entrant instructions and data), and your own copy of the “global variables” or Writeable Static Area (WSA). When using the C compiler, at bind time it includes a bit of code with 24-bit addressing mode. This means you have code which runs in 31/64-bit mode, and some code resident in 24-bit mode! It is no longer a single blob of instructions and data.

Within the load module there are different classes of data, for example

  • C_CODE – C code
  • C_WSA – for a C program compiled with the RENT option. This is the global data which each instance gets its own private copy of (see the sketch after this list).
  • B_TEXT – code from the assembler
  • Using the HL Assembler, you can define your own classes using CATTR.
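
As a tiny illustration of the C_CODE/C_WSA split: in a program compiled with RENT, a file-scope variable goes into the writeable static area, and the instructions go into the code class. The names here are mine; the class placement is as described above.

/* Compiled with RENT: 'counter' lives in C_WSA, so each program      */
/* instance gets its own private copy; the instructions go in C_CODE. */
int counter = 0;                /* writeable static -> C_WSA  */

int bumpCounter(void)           /* instructions     -> C_CODE */
{
    return ++counter;
}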

A class has attributes, such as

  • Length.
  • Should it be loaded or not? You could store documentation in the load module, which can be used by programs, but is not needed at execution time.
  • RMODE.
  • Is it reentrant or not?
  • Should this code be merged or replaced with similar code? For example the C globals section would be merged; a block of instructions would be replaced.

Segments

The binder can take things with similar attributes and store them together within a segment. You can have mixed classes, e.g. B_TEXT and C_CODE with the same RMODE attributes etc., in one segment. The C_WSA needs to be in a different segment because it has different attributes.

So where was my invisible code?

I needed to change my SPZAP job to tell it to dump out the C_CODE section. By default it dumps the B_TEXT sections. You can specify C_* or B_*. See the AMASPZAP documentation.

//STEP EXEC PGM=AMASPZAP
//SYSPRINT DD SYSOUT=A
//SYSLIB DD DISP=SHR,DSN=COLIN.C.LOAD
//SYSIN DD *
DUMPT COLIN CPPROGH C_CODE
/*

This dumps out (decoding the data into instructions) load module COLIN, CSECT CPPROGH and the C_CODE class.

Why wasn’t my code executing? The code to set up the C environment was not invoking my program because I had compiled it with the wrong options!

Writing system exits in C (and compiling them).

I wanted to call a C program from Rexx to do some special processing. The C programming guide gave me some hints, but I found it a struggle. It reminded me of when I was young and my father gave me a “beginners electronics kit” which had transistors, resistors, etc. You could build a “computer” that counted to 3, and make a radio. Unfortunately the instructions that came with it were in German, and for a different model of kit to the one I had. As a result it was very difficult to get working, but once you knew how, it was easy.

In the C programming guide there were instructions like “The CSECT must be the module entry point.” without saying which CSECT to use. It gave some sample programs, but not the JCL to compile them. After many failures (looking at dumps and traces) I found you had to compile the C programs with “NORENT”, which went against many years of experience.

I was using the System Programming C facility, which can be used, for example, for z/OS exits. Note: this is different to Metal C, which allows you to include assembler code in your C program.

Some background

  • These programs do not have a main() but are invoked with a z/OS type parameter list.
  • They can use C facilities, such as printf, but not LE functions.
  • You cannot use the UNIX file system functions.
  • They need to be called with the C environment set up. You cannot just branch to the entry point.
  • You can have several functions in the same source file. You branch to the one of interest.

Simple case

My C program was

#pragma environment(CPPROGH)
#pragma csect (CODE, "OEMPUT")
int CPPROGH(int * p, evalblock * pEval, char * env) {
...
return 0;
}

The pragma environment says: set up the C environment before executing this function. It takes the standard z/OS parameter list.

I needed some glue code to take the parameters from Rexx and store them in a parameter list for the function.

This glue code saves parameters from R0, 16(R1) and 20(R1), then executes the function.

ENVA RMODE ANY
ENVA AMODE 31 
ENVA  CSECT
  ...   
  L    R3,16(R1)  a(Parmlist) 
  ST   R3,Parmlist+0 
  L    R3,20(R1)  a(evalblk) 
  L    R3,0(R3) 
  ST   R3,Parmlist+4 
  ST   R0,PARMLIST+08  A(env block) 
  OI   PARMLIST+08,X'80' 
  la   r1,parmlist 
  L     R15,=V(CPPROGH) 
  BASR  R14,R15 

I wanted this to be called from REXX, which passes parameters in R0 and R1, so I had to write some glue code to store the parameters in storage before passing them to the program.

I compiled the glue code with

//GLUE EXEC PGM=ASMA90,PARM='DECK,NOOBJECT,LIST,XREF(SHORT),NORENT',
// REGION=0M
//SYSLIB DD DISP=SHR,DSN=CEE.SCEEMAC
// DD DSN=SYS1.MACLIB,DISP=SHR
//SYSUT1 DD UNIT=SYSDA,SPACE=(CYL,(1,1))
//SYSPUNCH DD DISP=SHR,DSN=COLIN.C.REXX.OBJ(GLUE2)
//SYSPRINT DD SYSOUT=*
//SYSIN DD DISP=SHR,DSN=COLIN.C.REXX(GLUE2)
//*

and compiled the C code with

//S1 JCLLIB ORDER=CBC.SCCNPRC
// SET LOADLIB=COLIN.C.REXX.LOAD
// SET LIBPRFX=CEE
//COMPILE EXEC PROC=EDCCB,
// LIBPRFX=&LIBPRFX,
// CPARM='OPTFILE(DD:SYSOPTF),NORENT',
// BPARM='SIZE=(900K,124K),RENT,LIST,XREF,RMODE=ANY,AMODE=31'
//COMPILE.SYSOPTF DD DISP=SHR,DSN=COLIN.C.REXX(CPARMS)
//COMPILE.SYSIN DD DISP=SHR,DSN=COLIN.C.REXX(CPPROGHE)
//BIND.SYSLMOD DD DISP=SHR,DSN=&LOADLIB.
//BIND.SYSLIB DD DISP=SHR,DSN=CEE.SCEESPC
// DD DISP=SHR,DSN=CEE.SCEELKED
//BIND.OBJLIB DD DISP=SHR,DSN=COLIN.C.REXX.OBJ
//BIND.SYSIN DD *
INCLUDE OBJLIB(GLUE2)
ENTRY ENVA
NAME COLIN(R)
/*

The EDCCB procedure to compile and bind stores the object deck in a temporary file, then passes this file and BIND.SYSIN into the binder.

C persistent environment.

The previous example created a C environment, ran my program, and deleted the C environment. If you want to make many calls to C functions you can set up a persistent C environment. In this environment you do the following:

  • From assembler, set up the environment.
  • From assembler, use the environment, and call the functions in your program as many times as you need.
  • From assembler, close down the environment.

This is well documented in the C programming guide (but not how to compile it).

The essence of my program was

Set up the environment

L R15,=V(EDCXHOTL)
BASR R14,R15

Call my function

   LA R4,HANDLE 
   LA R5,USEFN  USEFN has the function address 
   STM  R4,R5,PARMLIST 
* now the user parameters
  ...
   OI   PARMLIST+16,X'80' 
   LA   R1,PARMLIST 
   L    R15,=V(EDCXHOTU) 
   BASR R14,R15 
...
USEFN    DC V(CPPROGH) <<  This function name

Clean up

    LA R1,PARMLIST 
    OI 0(R1),X'80' 
    L R15,=V(EDCXHOTT) 
    BASR R14,R15 

My C program was

#pragma linkage(CPPROGH,OS)
int CPPROGH(int * p, evalblock * pEval, char * env) {
printf("in CPPROG\n");
return 0;
}

In this case the pragma is LINKAGE(CPPROGH,OS). The previous, self-contained code had ENVIRONMENT(CPPROGH). You need to use the right one.
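
To summarise the two approaches side by side (a sketch only; the pragma forms are as in the examples above):

/* Self-contained: the C environment is built and torn down on every call. */
#pragma environment(CPPROGH)

/* Persistent: the caller builds the environment once with EDCXHOTL, calls */
/* CPPROGH as often as needed via EDCXHOTU, and removes it with EDCXHOTT.  */
#pragma linkage(CPPROGH,OS)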

Which procedure do I use to compile?

The C books describe the various procedures, for example EDCCB for compile and bind, and EDCCL for compile and link edit. They do the same thing. The LINKEDIT uses program HEWL to link edit; the BIND uses IEWL to invoke the binder. These are both aliases of the binder IEWBLINK.

What’s the difference between BALR and BASR?

When coding, my fingers automatically used BALR (Branch and Link Register). This worked fine, but I should have used BASR (Branch and Save Register). As the Principles of Operation (POP) says

It is recommended, however, that BRANCH AND SAVE (BAS and BASR) be used instead and that BRANCH AND LINK be avoided since it places nonzero information in bit positions 32-39 of the general register in the 24-bit addressing mode, which may lead to problems and may decrease performance.

In 31-bit mode, with BALR 14,15 the return address stored in register 14 is a ‘1’ followed by the 31-bit address.

In 24-bit mode, the return address has other information at the top, including the condition code. Most of the time this information will be ignored.

So using BALR is not wrong; it is just that BASR is better.

Using the R_PKIServ PKI server callable service.

I tried to use PKI Services to generate a certificate so I could do OCSP verification. I tried using the R_PKIServ Security Server callable API. This ultimately failed, because key generation with PKI Server is not supported on my zPDT system running z/OS on my Linux system. Below are some of the things I learned about using this interface.

Most of the documentation is there and complete, but it assumes you are an expert in this area, so it is a bit tough when you are new to it.

I found there are two modes of operation (this was not clear):

  1. One is the SAF interface, an API for issuing RACDCERT requests – read up on the RACDCERT GENCERT(request-dataset-name) command.
  2. The other is to use the PKI server, and to store stuff in ICSF, and not use RACF.

My zPDT system does not support PKI to generate certificates, so I cannot comment on that.

The SAF/PKI mode of operation is determined by the SIGNWITH option.

  • SIGNWITH PKI: says use PKI,
  • SIGNWITH SAF:CERTAUTH/COLIN-CA says use SAF, and the specified CA certificate.

Options for Gencert

Table 2, “CertPlist for GENCERT and REQCERT”, defines all the options for GENCERT. Many of them apply only to PKI. (These fields have “Only valid with PKI Services requests” in the field description.) Some parameters define the contents of a certificate; others provide information about the request.

For SAF, these fields provide “other information”

  • DiagInfo – this is very helpful for diagnosing problems; it gives the name of the field causing the problem, see below.
  • SignWith – this defines whether SAF or PKI is used. If SAF, this is the CA certificate.
  • Userid – which ID will own the certificate
  • Label – this is the name under which the certificate is stored in the RACF database.

These fields provide information for the certificate

  • CommonName
  • PublicCert – this is a Base 64 encoded certificate request you want to sign and store in RACF
  • Title
  • OrgUnit (OU)
  • Org
  • Locality
  • StateProv
  • Country
  • KeyUsage – some values are valid with SAF
  • NotBefore
  • NotAfter
  • AltIPAddr
  • AltURI

The order in which you specify these components does not matter. The CN that was generated came out as

CN=Colin.T=COLINTITLE.OU=OUSSS.O=SSS.C=GB

exactly the same as if you issued the RACDCERT GENCERT command.

Diagnostic information

You have to provide a field called DiagInfo. This has some very good diagnostic information, especially when you get a return code saying “one of your parameters is not supported”. For example I got

safrc 8 racfrc 8 racfrs 52, where 52 means Incorrect field value specified in CertPlist.

The DiagInfo field layout is

  • a “DiagInfo ” eye catcher
  • an integer length of the following field
  • the additional information; in my case it was “SignWith”. I had specified SignWith:PKI, which was not supported.

On another occasion the field had

“Label” specified is already in use (IRRD111I)

so you can sometimes get the RACF (RACDCERT) error message as well.
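
Based on the layout above, this is roughly how I pictured the DiagInfo output area in C. The struct name and field lengths are my assumptions, not from the documentation.

#include <stdio.h>
#include <string.h>

/* hypothetical mapping of the DiagInfo area described above */
typedef struct {
    char eyeCatcher[9];   /* "DiagInfo "                        */
    int  textLen;         /* length of the text which follows   */
    char text[80];        /* e.g. "SignWith", or a RACF message */
} diagInfo;

void printDiagInfo(const diagInfo *d)
{
    if (memcmp(d->eyeCatcher, "DiagInfo ", 9) == 0)
        printf("%.*s\n", d->textLen, d->text);
}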

SAF interface and Public Cert

You can use this interface with a certificate request.

My certificate request was in a file with a format like

-----BEGIN CERTIFICATE REQUEST-----
MII…
...
C/l/hL4HV/iU2iX8EFr3BPlA2A==
-----END CERTIFICATE REQUEST-----

I read in the data between the BEGIN CERTIFICATE REQUEST and END CERTIFICATE REQUEST lines, and passed this in as the PublicCert.
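
For illustration, here is a minimal sketch of that read logic. readRequest is my own hypothetical helper, and real code needs more error handling.

#include <stdio.h>
#include <string.h>

/* Read a PEM-style certificate request, skip the BEGIN/END lines, and */
/* concatenate the Base64 body into buf. Returns the length, or -1.    */
long readRequest(const char *fileName, char *buf, long bufLen)
{
    char line[128];
    long used = 0;
    FILE *f = fopen(fileName, "r");
    if (f == NULL) return -1;
    while (fgets(line, sizeof(line), f) != NULL)
    {
        if (strstr(line, "-----BEGIN") || strstr(line, "-----END"))
            continue;                         /* skip the header and trailer */
        long l = (long)strcspn(line, "\r\n"); /* drop the line end           */
        if (used + l >= bufLen) { fclose(f); return -1; }
        memcpy(buf + used, line, l);
        used += l;
    }
    fclose(f);
    return used;                              /* length to pass as PublicCert */
}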

Using PKI Server with the HTTPD web interface.

This post follows on from configuring PKI Server. It explains how to configure the HTTPD server, how to use it, and gives some hints on debugging it when it goes wrong.

Having tried to get this working (and fixing the odd bug) I feel that this area is not as well designed as it could have been, and I could not get parts of it to work.

For example

  • You cannot generate a browser-based certificate request, because the <keygen> html tag was removed around 2017, and the web page fails. See here. You can use 1-Year PKI Generated Key Certificate instead, so it is not a big problem now we know.
  • The TLS cipher specs did not have the cipher specs I was using.
  • I was expecting a simple URL like https://10.1.1.2/PKIServer/Admin. You have to use https://10.1.1.2/PKIServ/ssl-cgi-bin/camain.rexx, which exposes the structure of the files. You can go directly to the Admin URL using https://10.1.1.2/PKIServ/ssl-cgi-bin/auth/admmain.rexx, which is not very elegant.
  • For an end user to request a certificate you have to use https://10.1.1.2/Customers/ssl-cgi-bin/camain.rexx.
  • There seem to be few security checks.
    • I managed to get into the administrative panels and display information using a certificate mapping to a z/OS userid, and with no authority!
    • There are no authority checks for people requesting a certificate. This may not be an exposure as the person giving the OK should be checking the request.
    • There were no security checks for administration functions. (It is easy to add them.)
  • You can configure HTTPD to use certificates for authentication and fall back to userid and password.
  • There is no FallbackResource specified. This is a default page which is displayed if you get the URL wrong.
  • The web pages are generated dynamically. This feels over-engineered. There was a problem with one of the supplied pages, but after some time trying to resolve the problem, I gave up.

I’ll discuss how to use the web interface, then I’ll cover the improvements I made to make the HTTP configuration files meet my requirements, and give some guidance on debugging.

You may want to use an HTTPD server just for PKI Server; if you want to share one, I suggest you allocate a TLS port just for PKI Server.

URL

The URL looks like

https://10.1.1.2:443/PKIServ/ssl-cgi-bin/camain.rexx

where (see Overview of port usage below for more explanation)

  • 10.1.1.2 is the address of my server
  • port 443 is for TLS with userid and password authentication
  • PKIServ identifies the part of the configuration to use. If you have multiple CAs this will be CA dependent.
  • ssl-cgi-bin is the “directory” where …
  • camain.rexx the Rexx program that does the work.

With https://10.1.1.2:443/Customers/ssl-cgi-bin/camain.rexx, the same camain.rexx is used as for PKIServ, but in the template for displaying data it uses the section with the same name (Customers) as the URL.

Overview of port usage

There are three default ports set up in the HTTPD server for PKI Server. I found the port set-up confusing, and not well documented. I’ve learned (by trial and error) that

  • port 80 (the default for non-https requests) is for unauthenticated requests, with no TLS session protection. All data flows as clear text. You may not want to use port 80.
  • port 443 (the default for https requests) is for authentication with userid and password, with TLS session protection.
  • port 1443 is for certificate authentication, with TLS session protection. Using https://10.1.1.2:443/PKIServ/clientauth-cgi/auth/admmain.rexx, internally this gets mapped to https://10.1.1.2:1443/PKIServ/clientauth-cgi-bin/auth/admmain.rexx. I cannot see the need for this port and its configuration.

and for the default configuration

  • port:/PKIServ/xxx is for administrators
  • port:/Customers/xxx is for an end user.

and xxx is

  • clientauth-cgi. This uses TLS for session encryption. Port 1443 runs with SAFRunAs PKISERV. All updates are done using the PKISERVD userid; this means you do not need to set up the admin authority for each userid. There is no security checking enabled. I was able to process certificates from a userid with no authority!
  • ssl-cgi-bin. This uses TLS and port 443. I had to change the file to use SAFRunAs %%CERTIF%%, as $$CLIENT$$ is invalid. You have to give each administrator ID access to the appropriate security profiles.
  • public-cgi. This is used by some insecure requests, such as print a certificate.

I think the only one you should use is ssl-cgi-bin.

Accessing the services

You can start with https://10.1.1.2/PKIServ/ssl-cgi-bin/camain.rexx or https://10.1.1.2/Customers/ssl-cgi-bin/camain.rexx.

These both give a page with

  • Administration Page. This may prompt for your userid and password, and gives you the administration page.
  • Customer’s Home Page. This gives a page (https://10.1.1.2/Customers/ssl-cgi-bin/camain.rexx?) called PKI Services Certificate Generation Application. This has functions like
    • Request a new certificate using a model
    • Pickup a previously requested certificate
    • Renew or revoke a previously issued browser certificate

Note: You cannot use https://10.1.1.2:1443/PKIServ/ssl-cgi-bin/camain.rexx, as 1443 is not configured for this. I could access the admin panel directly using https://10.1.1.2:1443/PKIServ/ssl-cgi-bin/auth/admmain.rexx

I changed the 443 definition to support client certificate and password authentication by using

  • SSLClientAuth Optional. This will cause the end user to use a certificate if one is available.
  • SAFRunAs %%CERTIF%%. This says use certificate authentication when available; if not, prompt for userid and password.

Certificate requests

I was able to use the admin interface and display all certificate requests.

Request a new certificate using a model.

I tried to use the model “1 Year PKI SSL Browser Certificate”. This asks the browser to generate a private/public key pair (rather than the PKI Server generating them). This had a few problems. Within the page is a <KEYGEN> tag which is not supported in most browsers. It gave me

  • The field “Select a key size” does not have anything to select, or type.
  • Clicking submit request gave me IKYI003I PKI Services CGI error in careq.rexx: PublicKey is a required field. Please use back button to try again or report the problem to admin person to

I was able to use a “1 Year PKI Generated Key Certificate”.

The values PKIServ and Customer are hard-coded within some of the files.

If you want to use more than one CA, read z/OS PKI Services: Quick Set-up for Multiple CAs. Use this book if you want to change “PKIServ” and “Customer”.

Colin’s HTTPD configuration files.

Because I had problems with getting the supplied files to work, I found it easier to restructure, parameterise and extend the provided files.

I’ve put these files up on github.

Basic restructure

I restructured and parameterised the files. The new files are

  • pki.conf. You edit this to define your variables.
  • 80.conf contains the definitions for a general end user, not using TLS. So the session is not encrypted. Not recommended.
  • 443.conf the definitions for the TLS port. You should not need to edit this while you are getting started. If you want to use multiple Certificate Authorities, then you need to duplicate some sections, and add definitions to the pki.conf file. See here.
  • 1443.conf the definitions for the TLS port for the client-auth path. You should not need to edit this while you are getting started. If you want to use multiple Certificate Authorities, then you need to duplicate some sections, and add definitions to the pki.conf file. See here.
  • Include conf/pkisetenv.conf to set some environment variables.
  • pkissl.conf. The SSL definitions have been moved to this file, and it has an updated list of cipher specs.

The top level configuration file pki.conf

The top level file is pki.conf. It has several sections

system wide

# define system wide stuff
# define my host name

Define sdn 10.1.1.2
Define PKIAppRoot /usr/lpp/pkiserv
Define PKIKeyRing START1/MQRING
Define PKILOG "/u/mqweb3/conf"

# The following is the default
#Define PKISAFAPPL "OMVSAPPL"
Define PKISAFAPPL "ZZZ"
Define serverCert "SERVEREC"
Define pkidir "/usr/lpp/pkiserv"

#the format of the trace entry
Define elf "[%{u}t] %E: %M"

Define the CA specific stuff

# This defines the path of PKIServ or Customers as part of the URL
# This is used in a regular expression to map URLs to executables.
Define CA1 PKIServ|Customers
Define CA1PATH "_PKISERV_CONFIG_PATH_PKIServ /etc/pkiserv"

#Define the port for TLS
Define CA1Port 443

# specify the groups which can use the admin facility
Define CA1AdminAuth " Require saf-group SYS1 "

other stuff

LogLevel debug
ErrorLog "${PKILOG}/zzzz.log"
ErrorLogFormat "${elf}"
# uncomment these if you want the traces
#Define _PKISERV_CMP_TRACE 0xff
#Define _PKISERV_CMP_TRACE_FILE /tmp/pkicmp.%.trc
#Define _PKISERV_EST_TRACE 0xff
#Define _PKISERV_EST_TRACE_FILE /tmp/pkiest.%.trc

#Include the files
Include conf/80.conf
Include conf/1443.conf
Include conf/443.conf

The TLS configuration file

The file 443.conf has several parts. It uses the parametrised values above, for example ${pkidir} is substituted with /usr/lpp/pkiserv/. When getting started you should not need to edit this file.

Listen ${CA1Port}
<VirtualHost *:${CA1Port}>

#define the log file for this port
ErrorLog "${PKILOG}/z${CA1Port}.log"


DocumentRoot "${pkidir}"
LogLevel Warn
ErrorLogFormat "${elf}"

Include conf/pkisetenv.conf
Include conf/pkissl.conf
KeyFile /saf ${PKIKeyRing}
SSLClientAuth Optional
#SSLClientAuth None

RewriteEngine On

# display a default page if there are problems
# I created it in ${PKIAppRoot}/PKIServ,
# (/usr/lpp/pkiserv/PKIServ/index.html)
FallbackResource "index.html"

Below are the definitions for one CA. If you want a second CA, duplicate the definitions and change CA1 to CA2.

# Start of definitions for a CA

<IfDefine CA1>
SetEnv ${CA1PATH}
RewriteRule ^/(${CA1})/ssl-cgi/(.*) https://${sdn}/$1/ssl-cgi-bin/$2 [R,NE]

RewriteRule ^/(${CA1})/clientauth-cgi/(.*) https://${sdn}:1443/$1/clientauth-cgi-bin/$2 [R,NE,L]
ScriptAliasMatch ^/(${CA1})/adm(.*).rexx(.*) "${PKIAppRoot}/PKIServ/ssl-cgi-bin/auth/adm$2.rexx$3"
ScriptAliasMatch ^/(${CA1})/Admin "${PKIAppRoot}/PKIServ/ssl-cgi-bin/auth/admmain.rexx"
ScriptAliasMatch ^/(${CA1})/EU "${PKIAppRoot}/PKIServ/ssl-cgi-bin/camain.rexx"
ScriptAliasMatch ^/(${CA1})/(public-cgi|ssl-cgi-bin)/(.*) "${PKIAppRoot}/PKIServ/$2/$3"
<LocationMatch "^/(${CA1})/clientauth-cgi-bin/auth/pkicmp">
CharsetOptions NoTranslateRequestBodies
</LocationMatch>
<LocationMatch "^/(${CA1})/ssl-cgi-bin(/(auth|surrogateauth))?/cagetcert.rexx">
Charsetoptions TranslateAllMimeTypes
</LocationMatch>
</IfDefine>

#End of definitions for CA1

Grouping the statements for a CA in one place means it is very easy to change to multiple CAs: just repeat the section between <IfDefine…> and </IfDefine> and change CA1 to CA2.

The third part has definitions for controlling access to a directory. I added some more security information, and changed $$CLIENT$$ to %%CERTIF%%. This is a subset of the file, for illustration:

# The User will be prompted to enter a RACF User ID
#and password and will use the same RACF User ID
# and password to access files in this directory
<Directory ${PKIAppRoot}/PKIServ/ssl-cgi-bin/auth>
AuthName AuthenticatedUser
AuthType Basic
AuthBasicProvider saf
Require valid-user

#Users must have access to the SAF APPLID to work
# ZZZ in my case
# it defaults to OMVSAPPL
<IfDefine PKISAFAPPL>
SAFAPPLID ${PKISAFAPPL}
</IfDefine>

# IBM Provided has $$CLIENT$$ where it should have %%CLIENT%%
# SAFRunAs $$CLIENT$$
# The following says use certificate if available else prompt for
# userid and password
SAFRunAs %%CERTIF%%
</Directory>…

Debugging hints and tips

I spent a lot of time investigating problems, and getting the definitions right.

Whenever I made a change, I used

s COLWEB,action='restart'

to cause the running instance of the HTTPD server to stop and restart. Any errors in the configuration are reported in the job which issued the action='restart'. It is easy to overlook configuration problems, and then spend time wondering why your change has not been picked up.

I edited the envvars file, and added code to rename and delete logs, for example rm z443.log.save and mv z443.log z443.log.save.

I found it useful to have

<VirtualHost *:443>
DocumentRoot "${pkidir}"
ErrorLog "${PKILOG}/z443.log"
ErrorLogFormat “${elf}”
LogLevel Warn


Where

  • ErrorLog is where the logs for this virtual host (port 443) are stored. I like to have one per port.
  • The format is defined in the variable Define elf "[%{c}t] %E: %M" in the pki.conf file. The c is compact time (2021-11-27 17:19:09). If you use %{cu}t you also get microseconds. I could not find how to get just the time, with no date.
  • LogLevel Warn. When trying to debug the RewriteRule and ScriptAlias statements I used LogLevel trace6. I also used LogLevel Debug authz_core_module:Trace6, which sets the default to Debug, but the authorization checking to Trace6.

With LogLevel Debug, I got a lot of good TLS diagnostics

Validating ciphers for server: S0W1, port: 443
No ciphers enabled for SSLV2
SSL0320I: Using SSLv3,TLSv1.0,TLSv1.1,TLSv1.2,TLSv1.3 Cipher: TLS_RSA_WITH_AES_128_GCM_SHA256(9C)

TLSv10 disabled, not setting ciphers
TLSv11 disabled, not setting ciphers
TLSv13 disabled, not setting ciphers
env_init entry (generation 2)
VirtualHost S0W1:443 is the default and only vhost

Then for each web session

Cert Body Len: 872
Serial Number: 02:63
Distinguished name CN=secp256r1,O=cpwebuser,C=GB
Country: GB
Organization: cpwebuser
Common Name: secp256r1
Issuer’s Distinguished Name: CN=SSCA256,OU=CA,O=SSS,C=GB
Issuer’s Country: GB
Issuer’s Organization: SSS
Issuer’s Organization Unit: CA
Issuer’s Common Name: SSCA256
[500865c0f0] SSL2002I: Session ID: A…AAE= (new)
[500865c0f0] [33620012] Peer certificate: DN [CN=secp256r1,O=cpwebuser,C=GB], SN [02:63], Issuer [CN=SSCA256,OU=CA,O=SSS,C=GB]

With LogLevel Trace6 I got information about the RewriteRule, for example we can see /Customers/EU was mapped to /usr/lpp/pkiserv/PKIServ/ssl-cgi-bin/camain.rexx

applying pattern ‘^/(PKIServ|Customers)/clientauth-cgi/(.*)’ to uri ‘/Customers/EU’

AH01626: authorization result of Require all granted: granted
AH01626: authorization result of : granted

should_translate_request: r->handler=cgi-script r->uri=/Customers/EU r->filename=/usr/lpp/pkiserv/PKIServ/ssl-cgi-bin/camain.rexx dcpath=/

uri: /Customers/EU file: /usr/lpp/pkiserv/PKIServ/ssl-cgi-bin/camain.rexx method: 0 imt: (unknown) flags: 00 IBM-1047->ISO8859-1

# and the output

Headers from script ‘camain.rexx’:
Status: 200 OK
Status line from script ‘camain.rexx’: 200 OK
Content-Type: text/html
X-Frame-Options: SAMEORIGIN
Cache-Control: no-store, no-cache

How difficult can it be to use BPXBATCH? It is harder and more interesting than you may think.

I just wanted to add a parameter to the started procedure for a web server. It took me over three days with lots of help to achieve it. I found that many things I thought I knew – turned out to be wrong!

BPXBATCH is a program that allows you to run shell scripts (and proper programs) in Unix Services on z/OS. As everyone knows, you can pass parameters using

  • exec pgm=bpxbatch,parm='…'
  • exec pgm=bpxbatch,parmdd=parmdd and having //parmdd dd …
  • //stdparm dd …

The challenge I had was to use this in a started task. I had

//COLWEB PROC AA='ABCD',BB='EFGH'
//IHS EXEC PGM=BPXBATCH,REGION=0M,
// PARM='SH longname.sh -aa &AA -bb &BB'
//STDOUT DD SYSOUT=H
//STDERR DD SYSOUT=H
//STDENV DD …

and wanted to add some debug information to the command. This would make the parm='…' longer than 100 characters, and so parm='…' cannot be used. I converted it to

//COLWEB PROC AA='ABCD',BB='EFGH'
//IHS EXEC PGM=BPXBATCH,REGION=0M,PARMDD=PARMDD
//PARMDD DD * SYMBOLS=JCLONLY
SH longname.sh xx -aa &AA -bb &BB -debug longstring
/*
//STDOUT DD SYSOUT=H
//STDERR DD SYSOUT=H
//STDENV DD …

This did not work – the parameters were not passed through.

If I used

//COLWEB PROC AA='ABCD',BB='EFGH'
// EXPORT SYMLIST=*
// SET T1='ABCD'
// SET T2='EFGH'

//IHS EXEC PGM=BPXBATCH,REGION=0M,PARMDD=PARMDD
//PARMDD DD * SYMBOLS=JCLONLY
SH longname.sh xx -aa &T1 -bb &T2 -debug longstring
/*

This worked OK. I believe the reason why &AA and &BB cannot be used is that symbols have to be exported before they are defined, for them to be usable in DD * statements. As you cannot put an “EXPORT” before the PROC statement, they cannot be exported.

A solution is

//COLWEB PROC AA='ABCD',BB='EFGH'
// EXPORT SYMLIST=*
// SET SAA=&AA
// SET SBB=&BB

//IHS EXEC PGM=BPXBATCH,REGION=0M,PARMDD=PARMDD
//PARMDD DD * SYMBOLS=JCLONLY
SH longname.sh xx -aa &SAA -bb &SBB -debug longstring
/*

This works fine; the command was

SH longname.sh xx -aa ABCD -bb EFGH -debug longstring

It now starts to get interesting

Originally I used

// SET SAA='&AA'

Having learned the hard way to put quotes around things.

The statement was then

SH longname.sh xx -aa &AA -bb &BB -debug longstring

When this executed the parameters were

xx -aa

This was a shock – where was the rest of the data?

The SH… basically invokes a shell command, and although the documentation does not describe it, you can do things like

SH set -x;longname.sh xx -aa &AA -bb &BB -debug longstring

Which sets the shell trace flag. Executing this prints out a trace of all of the shell program statements executed.

As this is a shell command, the “&” signifies that the command is to run asynchronously (in the background). This is well known in the Unix world; see the sh command syntax. The string passed to the command is everything up to the &.

If you want to pass a “&” you need \&, and so the string should really be specified as

SH set -x;longname.sh xx -aa \&AA -bb \&BB -debug longstring

Lessons learned

  • If you want to pass started task parameters through to an in-stream (DD *) data set, you need to use intermediate SET symbols.
  • Do not put quotes around the started task parameters.

Thanks

I’d like to thank Paul Gilmartin, who gave me a lot of help on the shell side of things.

Setting up the PKI server on z/OS.

The PKI server provides a certificate management package on z/OS. It provides a web interface for requesting and processing certificates, and updates LDAP when certificates are revoked. I feel that there should be a command interface, but you can write your own using the callable services.

I wanted to try this to generate certificates I could use with MQ, and check out the OCSP certificate validation in MQ.

Ultimately I was not able to get this working, as PKI depends on ICSF, which depends on some encryption technology which is not available on my zPDT system running z/OS on Ubuntu Linux. There were also bugs in the Web server files which initially stopped me from generating certificate requests.

I hope my experiences of my journey can help others who are trying to install it.

I’ve documented PKI and HTTPD server here.

Overall it took me a couple of days to get the PKI server up and working. If I had blindly used the product defaults – it might have been quicker, but more dangerous.

The documentation takes a “configure everything, then try to start it” approach. I prefer baby steps, where you start small, get the smallest system working, then add more function to it. For example, there are 15 pages of parameters for pkiserv.conf. I would rather be given a file of the parameters you must have, which you gradually extend.

You can have multiple PKI instances, for example if you have different CAs. There is a Redbook on this. I recommend you get the simple environment working first, then create the multi-CA environment. This may mean you throw away the first configuration, but you will have had valuable experience of setting it up.

The PKI Server is documented in Cryptographic Services PKI Services Guide and Reference (SA23-2286-50)

Overview

  • The PKI server is an application that runs as a started task.
  • For the end user, you can have
    • Apache HTTPD web server and Rexx execs,
    • or WAS Liberty with Java Server Pages.
    • or full function WAS.
  • It stores information about certificates in VSAM files.
  • It needs a Certificate Authority certificate, and a Registration Authority certificate. A registration authority (RA) is responsible for accepting requests for digital certificates and authenticating the entity making the request.
  • The server needs to be able to issue commands as a surrogate – on behalf of other users.
  • The PKI server stores information in LDAP.

Before I started I set up an LDAP server, and the HTTPD server, as it takes some time to set these up and get working (baby steps).

Setting up the RACF environment.

The IBM documentation is here.

There is a set-up script which can execute the RACF commands you need, or you can have the script display the commands (and not execute them).

I had problems with the definitions it was creating, so I took the list of commands, and modified them before executing them. It feels as if you must customise the script. I think it took longer to change the script, run it, change it, etc. until it all worked, than it would have taken to edit JCL with the statements embedded.

The setup script does the following

  • Creates some system wide profiles, some of which you may already have defined. Example profiles:
    • RDEFINE FACILITY BPX.SERVER
    • Enables Enhanced Generic Naming (EGN) which allows you to specify the generic character ** in datasets. This is most likely to be enabled anyway, but I did not want to enable this without proper consideration.
    • Activates generic profile checking for CSFKEYS CSFSERV etc.
    • Activates class CSFSERV and RACFLISTs it
  • Creates a userid, and group
    • You can specify an OMVS UID and GID (or let them default). I changed it to use OMVS AUTOUID and AUTOGID.
    • It sets up a dataset profile ADDSD ‘PKISRVD.**’ and gives the started task userid, and the PKI admin group access to this.
    • Gives the started task userid access to IRR.DIGTCERT.LISTRING to be able to use keyrings. I use the more specific RDATALIB, and give access to individual keyrings, rather than the more general IRR.DIGTCERT…. facility.
  • Sets up the certificates and keyrings
    • I prefer to use Elliptic Curve keys, rather than the default RSA. You can specify an option in IKYSETUP to pick which option(s) you want.
    • I had an existing CA certificate I wanted to use. It had been distributed to my whole enterprise (my laptop). You can set an option to not generate a certificate.
    • It has a naming scheme like SUBJECTSDN(OU('SSS') O('Your Company') C('GB')), which does not match mine. My CA is CN=SSSCA,OU=CA,O=SSS, without the Country specification. You can change the Rexx exec to whatever you want. I would rather change the raw RACDCERT definitions than change the Rexx and rerun it (and keep rerunning it till it worked).

The configuration script IKYSETUP has nearly 2000 lines; you have to carefully read 1000 lines, and change some (perhaps 50) lines of Rexx (and fix them when you get them wrong).

When I ran it, I experienced problems like

  • trying to allocate a data set IKYSETUP.LOG failed, because IKYSETUP is not a valid userid on my system. I had to put trace statements into the Rexx to find the problem. I edited the log_dsn=… statement to an acceptable name.
  • It tried to allocate a log with the ca_domain as the HLQ. I set ca_domain = “” to prevent this.
  • Rerunning the command did not always work, for example after I changed an adduser command, the second time the command failed because the userid already existed. I had to add a “delete user” command to get it to work.
  • The userid running the script was not put into the PKIGRP group.
  • The PKIGRP needs ALTER access to ‘PKISRVD.**’ – not just CONTROL.
  • It uses RALTER PROGRAM * when RALTER PROGRAM ** is better. (If you use RLIST PROGRAM * you get all definitions; if you use RLIST PROGRAM ** you get just those with **.)

The RACF statements were written like

  • Define a profile
  • Give the ID access to the profile.

I think it is better to split these, especially when there is a system-wide resource: you create it in one file, and give access in other files.

If you add a user, then you may want to do a delete user before the add user command (or to do a list user followed by a delete user – in case you get it wrong).

Where the script has

RDEFINE FACILITY IRR.DIGTCERT.LISTRING

You do not want to just delete this, as the profile may be used by other applications.

I’ve been managing my certificates by RDATALIB rather than FACILITY, so I had to create my own definitions in a file. I was much more comfortable using the PDS members with the definitions in them.

I’ve put the files up on github.

  • Review and run the SYSTEM files.
  • Review and run the USER files
  • Review and run the INSTANCE file.
  • Connect your userid to the PKIGRP group. CONNECT IBMUSER GROUP(PKIGRP)
  • To back up the certificate, the userid issuing the command needs ALTER access to ‘PKISRVD.**’. The group PKIGRP is only given CONTROL.
    • PERMIT 'PKISRVD.**' ID(PKIGRP) ACCESS(ALTER)
  • Optional: review the UNIX file which does RALTER PROGRAM **… for the CSF libraries (and others if needed).

My process of defining the server

I suggest you do the RACF configuration first, so you set up the High Level Qualifier and RACF profile before you create the VSAM data sets, because you do not usually want the VSAM data sets cataloged in the master catalog.

You should use an existing HLQ, or define an alias for PKISRVD to point to an existing user catalog. See create an ALIAS, and create a user catalog if you do not have a user catalog.

If you have to move systems (for example upgrading your zPDT system) you just need to import the catalog, and rerun the define alias command, and all the datasets will be available on your new system (rather than have to re-catalog the individual data sets).

Create the VSAM files

This was easy,

  • copy SYS1.SAMPLIB(IKYCVSAM) to your PDSE.
  • change the job card
  • change VOL(vvvvvv) to VOL(USER00)
  • If you want to change the HLQ, change PKISRVD to the new HLQ

If you have to rerun the job, it deletes the datasets before recreating them (great!).

You might need to talk to your Storage Administrator about any other parameters you need, for example how often the data sets should be backed up, or migrated. The Storage Administrator may need to change the SMS profiles for the High Level Qualifier.

For production make sure these data sets are backed up regularly and taken off-site.

There is a lot of good information in the documentation on this.

You can display the contents of the VSAM datasets using the iclview shell command. I had to set up a shell script with

export PATH=/usr/lpp/pkiserv/bin/
export LIBPATH=/usr/lpp/pkiserv/lib
export NLSPATH=/usr/lpp/pkiserv/lib//usr/lpp/nls/msg/%L/%N
/usr/lpp/pkiserv/bin/iclview -d \’PKISRVD.VSAM.ICL\’

You need to escape the quotes around the data set name.

During first set up (where I changed the CA I wanted to use) I got

Error 76677164 initializing ICL: The CA certificate in the ICL does not match the one in the keyring

I had to recreate the VSAM datasets.

Create and configure the PKISERVD configuration file

This is documented here.

Check to see if the runtime instance directory exists, and if not, create it. You need one for each PKI Server. The documentation recommends /etc/pkiservd, but you can use another one.

ls -ltr /etc/pkiservd
mkdir /etc/pkiservd

Copy files from the supplied sample.

cp -r /usr/lpp/pkiserv/samples/* /etc/pkiservd/

Edit pkiserv.conf

You need to change the LDAP information.

[LDAP]
NumServers=1
PostInterval=5m
Server1=127.0.0.1:389
AuthName1=cn=ibmuser, o=Your Company
AuthPwd1=password

Later you can change the file, and use a LDAPBIND profile, and remove the need to have the password stored in clear text.

Find KeyRing= and change (if necessary) the keyring value so that it matches the ring you created. ISPF may have upper-cased it.

Check RALabel= for the one you created.

Edit pkiserv.envars and change _PKISERV_CONFIG_PATH= to the instance path.

Review the started task JCL PKISERVD

I changed the time zone. The JCL looked like

//PKISERVD PROC REGSIZE=256M,
// OUTCLASS='H',
// TZ='gmt0',
// FN='pkiserv.envars',
// DIR='/etc/pkiserv',
// STDO='1>DD:STDOUT',
// STDE='2>DD:STDERR'

Start it using S PKISERVD, and resolve any problems. You can stop it using P PKISERVD.

My favourite IDCAMS commands.

As part of my working with ADCD and having to do every system programmer task myself, it is easy to get into trouble by having all data sets cataloged in the master catalog. When you come to move to a newer level of ADCD, all of your datasets are cataloged in the old catalog.

It is better to create a user catalog for the data sets you create, and create an alias, so the data sets are cataloged in your user catalog. You then just have to import the catalog when you move to the newer level, and recreate the aliases, and your data sets will all be visible.

I tend to use JCL like

//IBMDF JOB 1,MSGCLASS=H
// EXPORT SYMLIST=*
// SET NAME=PKISRVD
//* CHANGE THE NAME IN THE RELATE TO YOUR CATALOG
//S1 EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN DD *,SYMBOLS=JCLONLY
 LISTCAT ENTRIES('&NAME') ALL
/*

and change the value in the SET statement.

Define a user catalog

DEFINE USERCATALOG -
 ( NAME('&NAME') -
 MEGABYTES(15 15) -
 VOLUME(&VOL) -
 ICFCATALOG -
 FREESPACE(10 10) -
 STRNO(3) ) -
 DATA( CONTROLINTERVALSIZE(4096) -
 BUFND(4) ) -
 INDEX( BUFNI(4) )

List a user catalog

LISTCAT ENTRIES('&NAME') ALL

Delete a user catalog

DELETE -
 &NAME -
 PURGE -
 USERCATALOG

Create an alias to map a HLQ to a user catalog

DEFINE ALIAS (NAME(ZZZZ) RELATE('A4USR1.ICFCAT') )

Delete an alias

DELETE 'ZZZZ' ALIAS

List alias to catalog reference

LISTCAT ENTRIES('ZZZZ') ALL

This just tells you the alias exists and which user catalog it uses.

List data sets under an alias

LISTCAT ENT(CEE.*) ALL

gives

LISTCAT ENTRIES(CEE.* ) ALL
...
ALIAS --------- CEE.SCEEBIND                                                      
     IN-CAT --- ICFCAT.PLEXH.CATALOG3                                             
     HISTORY                                                                      
       RELEASE----------------2     CREATION--------0000.000                      
     ENCRYPTIONDATA                                                               
       DATA SET ENCRYPTION-----(NO)                                               
     ASSOCIATIONS                                                                 
       SYMBOLIC-PP.ADLE370.&SYSLEVEL..SCEEBIND                                    
       RESOLVED-PP.ADLE370.ZOS204.SCEEBIND                                        

So we can see that the catalog uses a system symbol, &SYSLEVEL. On the current system this resolves to ZOS204. When a ZOS205 system is used, the symbol will resolve to the new value, and all of the data sets will pick it up. The resolved value (what it is now) is PP.ADLE370.ZOS204.SCEEBIND.