Various TLS return codes

When debugging TLS problems I got various return codes. I’m collecting them here, so I can find them next time I have a problem.

I’d be happy to add any problems and solutions you find – please let me know.

TLS Handshake failure

Alert 40

Wireshark produced

  • Alert Message
    • Level: Fatal (2)
    • Description: Handshake Failure (40)

Looking in the CTRACE I got

No SSL V3 cipher specs enabled for TLS V1.3

See tls-1-3-everything-possibly-needed-know. TLS 1.3 has just five recommended cipher suites:

  • TLS_AES_256_GCM_SHA384
  • TLS_CHACHA20_POLY1305_SHA256
  • TLS_AES_128_GCM_SHA256
  • TLS_AES_128_CCM_8_SHA256
  • TLS_AES_128_CCM_SHA256
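As a quick cross-check of what your client side offers, you can list the TLS 1.3 suites from Python. This is a sketch assuming Python 3.7 or later built with OpenSSL 1.1.1 or later:

```python
# List the TLS 1.3 cipher suites the local OpenSSL stack offers.
import ssl

ctx = ssl.create_default_context()
tls13 = [c["name"] for c in ctx.get_ciphers() if c["protocol"] == "TLSv1.3"]
print(tls13)
```

Comparing this list with the suites enabled on the server is a quick first check before digging into the CTRACE.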

Alert 51

With TLS 1.3, a certificate like

SUBJECTSDN(CN('10.1.1.2') - 
O('NISTEC256') -
OU('SSS')) -
ALTNAME(IP(10.1.1.2))-
NISTECC -
KEYUSAGE( HANDSHAKE ) -
SIZE(256 ) -
SIGNWITH (CERTAUTH LABEL('DOCZOSCA')) -
WITHLABEL('NISTEC256')

failed. Changing it to SIZE(512) worked. Strange, because size 256 is supposed to be supported.

Debug details

From the CTRACE

 ICSF service failure: CSFPPKS retCode = 0x8, rsnCode = 0x2b00                                            

S0W1 MESSAGE 00000004 10:25:45.006617 SSL_ERROR
Job TCPIP Process 0001003B Thread 00000003 crypto_sign_data
crypto_ec_sign_data() failed: Error 0x03353084

S0W1 MESSAGE 00000004 10:25:45.006883 SSL_ERROR
Job TCPIP Process 0001003B Thread 00000003 construct_tls13_certificate_verify_message
Unable to generate certificate verify message: Error 0x03353084

S0W1 MESSAGE 00000004 10:25:45.007124 SSL_ERROR
Job TCPIP Process 0001003B Thread 00000003 send_tls13_alert
Sent TLS 1.3 alert 51 to ::ffff:10.1.0.2.43416.

In z/OS Unix the command

grep 03353084 /usr/incl/gsk

gave

/usr/include/gskcms.h:#define CMSERR_ICSF_SERVICE_FAILURE         0x03353084          

The ICSF API documentation points to the return codes. 2B00 (11008) says

The public or private key values are not valid (for example, the modulus or an exponent is zero or the exponent is even) or the key could not have created the signature (for example, the modulus value is less than the signature value). In any case, the key cannot be used to verify the signature.

Changing to

Policy agent

...
ServerCertificateLabel NISTECC521
...
RACDCERT ID(START1) GENCERT -                             
SUBJECTSDN(CN('10.1.1.2') -
O('NISTECC256') -
OU('SSS')) -
ALTNAME(IP(10.1.1.2))-
NISTECC -
KEYUSAGE(HANDSHAKE ) -
SIZE(256) -
SIGNWITH (CERTAUTH LABEL('DOCZOSCA')) -
WITHLABEL('NISTECC256')

worked.

I needed to issue F CPAGENT,REFRESH to pick up the change. I needed to refresh the policy agent because I was using TN3270, which uses AT-TLS.

Session just ends with no alert

Looking at the CTRACE output I got

S0W1      MESSAGE   00000004  12:52:55.333904  SSL_ERROR                                  
Job TCPIP Process 0201001E Thread 00000001 crypto_chacha_encrypt_ctx
ICSF service failure: CSFPSKE retCode = 0x8, rsnCode = 0xbfe

S0W1 MESSAGE 00000004 12:52:55.334123 SSL_ERROR
Job TCPIP Process 0201001E Thread 00000001 crypto_chacha_encrypt_ctx
The algorithm or key size is not supported by ICSF FIPS

S0W1 MESSAGE 00000004 12:52:55.334355 SSL_ERROR
Job TCPIP Process 0201001E Thread 00000001 gsk_encrypt_tls13_record
ChaCha20 Encryption failed: Error 0x0335308f

The return code 0xbfe is

The PKCS #11 algorithm, mode, or keysize is not approved for ICSF FIPS 140-2. This reason code can be returned for PKCS #11 clear key requests when ICSF is in a FIPS 140-2 mode or 140-3,HYBRID mode. To see how 8/BFE(3070) can be returned when the ICSF FIPSMODE is 140-3,HYBRID, see ‘Requiring FIPS 140-2 algorithm checking from select z/OS PKCS #11 applications’ in z/OS Cryptographic Services ICSF Writing PKCS #11 Applications.

FIPS was incorrectly specified, for example FIPS 140 with TLS 1.3.

How do you download and use a dataset from z/OS?

Transferring a dataset from z/OS to Windows or Linux and using it can be a challenge.

A record in a data set on z/OS has a 4 byte Record Descriptor Word (RDW) on the front of the record. The first two bytes give the length of the record (the other two bytes are typically 0).

FTP has two modes for transferring data: ASCII and binary (BIN).

ASCII

With ASCII mode, FTP reads the record,

  • Removes the RDW
  • Converts it from EBCDIC to ASCII
  • Adds a “New Line” character to the end of data
  • Sends the data
  • Writes the data to a file stream.

On Unix and Windows a text file is a long stream of data. When the file is read, a New Line character ends the logical record, and so you display the following data on a “New Line”.

Binary mode

Binary mode is used when the dataset has binary content, not just printable characters. The New Line hex character could occur within the binary data, so this character cannot be used to delineate records.

FTP has an option for RDW

quote site RDW

The default is RDW FALSE.

If RDW is FALSE then FTP removes the RDW from the data before sending it. At the remote end, the data is a stream of data, and you have no way of identifying where one logical record ends, and the next logical record starts.

If RDW is TRUE, then the 4 byte RDW is sent as part of the data. The application reading the file can read the information and calculate where the logical record starts and ends.

For example, on z/OS the dataset has the following records (in hex). The 4 byte RDW at the front of each record is not displayed when you edit or browse the dataset; only the data after it is.

00040000C1C2C3C4
00020000D1D2
00050000E1E2E3E4E5

If the data was transmitted with RDW FALSE the data in the file would be

C1C2C3C4D1D2E1E2E3E4E5

If the data was transmitted with RDW TRUE the data in the file would be

00040000C1C2C3C400020000D1D200050000E1E2E3E4E5

Conceptually you can process this file stream using C code:

short RDW;            /* 2 byte record length    */
short dummy;          /* 2 bytes which are 0     */
char  mydata[32768];  /* buffer for the record   */

fread(&RDW, 2, 1, f);       /* get the length  */
fread(&dummy, 2, 1, f);     /* ignore the 0s   */
fread(mydata, RDW, 1, f);   /* read the record */

...
/* and the same again for each following record */
fread(&RDW, 2, 1, f);
fread(&dummy, 2, 1, f);
fread(mydata, RDW, 1, f);

In practice this will not work because numbers on z/OS are Big Endian, while X86 and ARM machines are Little Endian. (With Big Endian the left byte is most significant; with Little Endian the right byte is most significant – the bytes are transposed.)

On z/OS the byte sequence 0x00 0x04 is decimal 4. On X86 and ARM the byte sequence 0x04 0x00 is decimal 4.

In practice you need code on X86 and ARM, like the following, to get the value of a half word from a z/OS data set.

unsigned char RDW[2];            /* 2 bytes               */
fread(RDW, 2, 1, f);             /* get the length        */
length = 256 * RDW[0] + RDW[1];  /* Big Endian half word  */

and similarly for longer integers.

Python

If you are using the Python struct facility, you can pass a string of data types and get the processed values.

  • The string “>HH” says two half words, and the > says the numbers are Big Endian.
  • The string “<HH” says two half words and the < says they are Little Endian
  • The string “HH” says two half words – read in the default representation.
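As a sketch, assuming (as in the example above) that the halfword length excludes the 4 byte RDW itself, reading the RDW TRUE file in Python might look like:

```python
import io
import struct

# The RDW TRUE byte stream from the example above.
stream = io.BytesIO(bytes.fromhex("00040000C1C2C3C4"
                                  "00020000D1D2"
                                  "00050000E1E2E3E4E5"))
records = []
while True:
    rdw = stream.read(4)
    if len(rdw) < 4:
        break
    length, _zeros = struct.unpack(">HH", rdw)  # ">" = Big Endian
    records.append(stream.read(length))

print([r.hex().upper() for r in records])
# → ['C1C2C3C4', 'D1D2', 'E1E2E3E4E5']
```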

Conversion

You’ll need to do your own conversion from EBCDIC to ASCII to make things printable!
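Python has built-in EBCDIC codecs for this; a minimal sketch using code page 037 (the common US/Canada EBCDIC code page):

```python
# C1C2C3C4 is "ABCD" in EBCDIC code page 037.
data = bytes.fromhex("C1C2C3C4")
text = data.decode("cp037")
print(text)  # → ABCD
```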

How do you trust a file?

I was asked this question by someone wanting to ensure their files have not been hacked. In the press there are articles where bad guys have replaced some code with code that steals credentials, or allows an outsider access to your machine. One common solution to trusting a file uses cryptography.

There are several solutions that do not work

Checking the date of a file.

This does not work because there are programs that allow you to change the date and time of files.

Checking the number of bytes

You can list a file’s properties. One property is the size of the file. You could keep a list of files and their sizes.

There are two problems

  1. You can change the contents of the file without changing the size of the file. I’ve done this. Programs used to have a patch area where skilled people could write some code to fix problems in the program.
  2. Someone changes the contents of the file – but also changes your list to reflect the new size, and then changes the date on the file and in your list, so they look as if they have not changed.

Hashing the file contents

Do a calculation on the contents of the file. A trivial function, easy to implement and easy to exploit, is to treat each character as an unsigned integer and add up all of the characters.

A better hashing function is to do a calculation cs = mod(c ** N, M). For example, when the current character c is 3, N is 7 and M is 13: find the remainder of 3*3*3*3*3*3*3 (= 2187) when divided by 13; the answer is 3. N and M should be very large. Instead of using one character at a time you take 8 or more bytes. You then apply the algorithm over the whole file.

cs = 0
do for each 8 bytes of the file
    cs = mod((cs + the 8 bytes) ** N, M)
end
display cs

Some numbers N and M are better than others. Knowing the value cs, you cannot go back and recreate the file.
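A toy version of this in Python. The values of N and M here are illustrative only; this is nothing like a real cryptographic hash:

```python
N = 7
M = 2_147_483_647  # a large prime; illustrative only

def toy_hash(data: bytes) -> int:
    cs = 0
    # process the file 8 bytes at a time
    for i in range(0, len(data), 8):
        chunk = int.from_bytes(data[i:i + 8], "big")
        cs = pow(cs + chunk, N, M)  # cs = mod((cs + the 8 bytes) ** N, M)
    return cs

print(toy_hash(b"hello world"))
```

In practice you would use a real cryptographic hash such as SHA-256 (hashlib.sha256) rather than rolling your own.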

If you just store the checksum value in a file, then the bad guys can change this file, and replace the old checksum with the new checksum of the file with their change. It appears that doing a checksum on the file does not help.

Cryptography to the rescue

To make things secure, there are several bits of technology that are required

  • Public and private keys
  • How do I trust what you’ve sent me

Public and private keys

Cryptography has been around for thousands of years. Traditionally there was a single key which was used to encrypt the data, and the same key was used to decrypt it.

The big leap in cryptography was the discovery of asymmetric keys, where you need two keys. One can be used for encryption, and you need the other for decryption. You keep one key very secure (and call it the private key) and make the other key publicly available (the public key). Either key can be used to encrypt, and you need the other key to decrypt.

The keys can be used as follows

  • You encrypt some data with my public key. It can only be decrypted by someone with my private key.
  • I can encrypt some data with my private key and send it to you. Anyone with my public key can decrypt it. In addition, because they had to use my public key, they know it came from me (or, to be more accurate, someone with my private key).

How do I trust what you’ve sent me

I would be very suspicious if I received an email saying

This is your freindly bank heree. Please send us your bank account details with this public key. Pubic keys are very safe and we are the only peoples who can decrypt what you send me.

Digital certificates and getting a new passport

A public certificate has

  • Your name
  • Your address, such as Country=GB, Org=Megabank.com,
  • Your public key
  • Expiry date
  • What the certificate can be used for

I hope the following analogy explains the concepts of digital certificates.

Below are the steps required to get a new passport

  • You turn up at the Passport Office with your birth certificate, a photograph of you, a gas bill, and your public certificate.
  • The person in the office checks
    • that the photo is of you.
    • your name is the same as the birth certificate
    • the name on the gas bill matches your birth certificate
    • the address of the gas bill is the same as you provided for your place of residence.
  • The office creates the passport, with information such as where you live (as evidenced by the gas bill)
  • The checksum of your passport is calculated.
  • The checksum is encrypted with the Passport Office’s PRIVATE key.
  • The encrypted checksum and the Passport Office’s PUBLIC key are printed, and stapled to the back of the passport
  • The passport is returned to you. It has been digitally signed by the Passport Office.

How do I check your identity?

At the back of MY passport is the printout of the Passport Office’s public key. I compare this with the one attached to your passport – they match!

I take the encrypted checksum from your passport, and decrypt it using the Passport Office’s public key (yours or mine – they are the same). I write this on a piece of paper.

I do the same checksum calculation on your passport. If the value matches what is on the piece of paper, then you can be confident that the passport has not been changed, since it was issued by the Passport Office. Because I trust the Passport Office, I trust they have seen your birth certificate, and checked where you live, and so I trust you are who you say you are.

But..

Your passport was issued by the London Passport Office, and my passport was issued by the Scottish Passport Office, and the two public certificates do not match.

This problem is solved by the use of a Certificate Authority (CA).

Consider a UK wide Certificate Authority office, the UKCA. The Scottish Passport Office sent their certificate (containing name, address and public key) to the UKCA. The UKCA did a checksum of it, encrypted the checksum with the UKCA PRIVATE key, and attached the encrypted checksum and the UKCA public certificate to the certificate sent in – the same process as getting a passport.

Now when the Scottish Passport Office processes my passport, they do the checksum as before, and affix the Scottish Passport Office’s public certificate as before… but this certificate has a copy of the UKCA’s certificate, and the encrypted checksum, stuck to it. The passport now has two bits of paper stapled to it: the Scottish Passport Office’s public certificate, and the UKCA’s public certificate.

When I validate your passport I see that the London Passport Office’s certificate does not match the Scottish Passport Office’s certificate, but they have both been signed by the UKCA.

  • I compare the UKCA’s public certificates – they match!
  • I decrypt the checksum from the London office using the UKCA’s certificate and write it down
  • I do the same checksum calculation on the London office’s certificate and compare with what is written down. They match – I am confident that the UKCA has checked the credentials of the London office
  • I can now trust the London certificate, and use it to check your passport as before.

What happens if I do not have the UKCA certificate

Many “root” certificates from corporations are shipped with Windows, Linux, z/OS, Macs etc. The UKCA goes to one of these corporations, gets its certificate signed, and includes the corporation’s certificate attached to the UKCA certificate. Of course it costs money to get your certificate signed by one of these companies.

You could email the UKCA certificate with the public key to everyone you know. This has the risk that bad guys who are intercepting your email replace the official UKCA certificate with their certificate. Plan B would be to ship a memory stick with the certificate on it – but the same bad guys could be monitoring your mail, and replace the memory stick with one of theirs.

How does this help me trust a file?

The process is similar to that of getting a passport.

My “package” has two files abc.txt and xyz.txt

At build time

  • Create the files abc.txt and xyz.txt
  • Calculate the checksum of each file, and encrypt the value with your private key. This produces a binary signature file for each file, for example abc.txt.signature
  • Create a directory with
    • Your public certificate/public key
    • A directory containing all of the signature files
    • A list of all of the files in the signature directory
    • A signature of the directory listing: directory.list.txt.signature

You ship this directory as part of your product.

When you install the package

  • Validate the certificate in the package against the CA stored in your system.
  • Verify the signature of the list of files in the directory (directory.list.txt.signature). Check the list of files is valid
  • For each line in the directory list, go through the same validation process with the file and its signature.

For the paranoid

Every week calculate the checksum of each file in the package and send it to a remote site.

At the remote site compare the filename, checksum combination against last week’s values.

If they do not match, the files have been changed.

Of course if your system has been hacked, the bad guys may be intercepting this traffic and changing it.
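A minimal sketch of the weekly check in Python, using SHA-256 from hashlib (the file names are illustrative):

```python
import hashlib
import pathlib

def checksums(files):
    """Return {file name: SHA-256 hex digest} for a list of files."""
    return {f: hashlib.sha256(pathlib.Path(f).read_bytes()).hexdigest()
            for f in files}

def changed_files(this_week, last_week):
    """Names whose checksum differs from (or is missing in) last week's list."""
    return sorted(f for f in this_week if this_week[f] != last_week.get(f))

# this_week = checksums(["abc.txt", "xyz.txt"])   # names are illustrative
# send this_week to the remote site and compare it with last week's values
```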

How do I do it?

I have a certificate mycert.pem, and my private key mycert.key.pem. The certificate was signed by the CA ca256.

Build

Run the command against the first file

openssl dgst -sign mycert.key.pem abc.txt   > abc.txt.signature

Move abc.txt.signature to the package’s directory.

Create the trust package

/
.. /mycert.pem
.. /directory.list.txt
.. /directory.list.txt.signature
.. /signatures/
.. .. /abc.txt.signature
.. .. /xyz.txt.signature

Validate the package

Validate the certificate in the package.

openssl verify -CAfile ca256.pem mycert.pem 

Extract the public key from the certificate.

openssl x509 -pubkey -noout -in mycert.pem > mycert.pubkey

Validate the checksum of the abc file using the public key.

openssl dgst -verify ./mycert.pubkey  -signature abc.txt.signature  abc.txt

Does it work with data sets?

On z/OS I created a signature file with

openssl dgst -sign tempcert.key.pem  "//'COLIN.JCL(ALIAS)'"  > jcl.alias.signature

and validated it with

openssl dgst -verify tempcert.pubkey -signature jcl.alias.signature  "//'COLIN.JCL(ALIAS)'"   

Formatting SYSADATA from HLASM

I wanted to extract information about DSECTS from the SYSADATA output from compiling an assembler program on z/OS.

On the whole it was pretty easy – but had some surprises!

I’ve put some Python code up on github. It runs on my z/OS.

Where is the record layout documented?

The record layout is documented in the HLASM V1R6 Programmer’s Guide. DSECTs for the various record types are provided in HLA.SASMMAC1(ASMADATA).

I used record type 0x0042 for symbols. For a symbol to get a record in this section of the ADATA, it needs a label.

For example

      DSECT COLIN2
ABCD DS CL8
ABCDE DS CL8

This will not produce a record for the DSECT, because the DSECT statement does not have a label.

ESDID – section names

Each DSECT or CSECT will have an External Symbol Directory ID.

  • CSECTs start at 1 and increment: 1, 2, …
  • DSECTs start at -1 and decrement: 4294967295 = 0xFFFFFFFF (-1), 4294967294 = 0xFFFFFFFE (-2)
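Those large numbers are just the 4 byte ESDID interpreted as unsigned; reading the same bytes as a signed Big Endian integer gives the -1, -2 values. A small sketch in Python:

```python
import struct

raw = bytes.fromhex("FFFFFFFF")        # ESDID of the first DSECT
unsigned, = struct.unpack(">I", raw)   # 4294967295
signed, = struct.unpack(">i", raw)     # -1
print(unsigned, signed)
```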

Field order

The order of records seems to be random. The CSECT/DSECT statement is often after some fields in the CSECT/DSECT.

To find the xSECT for each symbol, I saved the SECT name and ESDID, and post processed the list of symbols by adding the xSECT information afterwards from the ESDID.

Field offsets

The offsets in each record seem to be the offset from the first instruction. I had to save the offset from the CSECT statement, then post process the records to calculate (offset of symbol in CSECT) = symbol offset – start_of_CSECT offset.

“Problems”

Missing data

To get a record into the ADATA ensure it has a label.

Output from my code

     ESDID  Symbol    Offset  Length  TypeA  SymType        CSECT
4294967295  ABCD           0       8  C      OrdinaryLabel  COLIN2
4294967295  COLIN2         0       1  J      DSECT          COLIN2
4294967295  ABCDE          8       8  C      OrdinaryLabel  COLIN2
         1  CSQ6LOGP       0       1  J      CSECT          CSQ6LOGP
         1  LOGP           0       8  D      OrdinaryLabel  CSQ6LOGP
         1  LOGPID         0       2  R      OrdinaryLabel  CSQ6LOGP
         1  LOGPLL         2       2  R      OrdinaryLabel  CSQ6LOGP
         1  LOGPEID        4       4  C      OrdinaryLabel  CSQ6LOGP
         1  LOGPMRTU       8       2  R      OrdinaryLabel  CSQ6LOGP
         1  LOGOPT1       10       1  R      OrdinaryLabel  CSQ6LOGP
         1  LOGOPT2       11       1  R      OrdinaryLabel  CSQ6LOGP
         1  LOGPMCOF      12       2  R      OrdinaryLabel  CSQ6LOGP
         1  LOGPOBPS      16       4  R      OrdinaryLabel  CSQ6LOGP
         1  LOGPIBPS      20       4  R      OrdinaryLabel  CSQ6LOGP
         1  LOGPARCL      24       4  R      OrdinaryLabel  CSQ6LOGP
         1  LOGPWRTH      30       2  R      OrdinaryLabel  CSQ6LOGP
         1  LOGPLVL       32       7  C      OrdinaryLabel  CSQ6LOGP
         1  LOGPLVLN      39       1  R      OrdinaryLabel  CSQ6LOGP
         1  LOGPDMIN      40       2  R      OrdinaryLabel  CSQ6LOGP
         1  LOGPDSEC      42       2  R      OrdinaryLabel  CSQ6LOGP
         1  LOGPCOMP      44       4  R      OrdinaryLabel  CSQ6LOGP
         1  LOGPEND      256       1  U      EQU            CSQ6LOGP

FIPS, TLS 1.3, AT-TLS, z/OS and not connecting.

Or, My TLS connection just dies during the handshake – because of FIPS!

I was working with John M. on a problem connecting a client machine to talk to z/OS TN3270, and this identified some “interesting” holes.

  • The root cause is that on z/OS 3.1 and earlier AT-TLS does not support FIPS with TLS 1.3.
  • There is support in z/OS 3.2 for FIPS 140-3.
  • The cards in ICSF need to be configured for FIPS. If they are not configured, the sessions will fail with a trace entry in the CTRACE output saying “FIPS not supported” or some other vague message.
  • You can use the operator command D ICSF,CARDS to display the status.
  • You can use the ISPF panels.
    • In ISPF option 6 type the command @ICSF. This displays the ICSF main panel.
    • Option 1 COPROCESSOR MGMT
    • It displays your co-processors.
    • Use the S line command on the co-processors
    • If you get a message like FIPS Compliance Mode: NOT SUPPORTED, you need to reconfigure your co-processors.
  • Configuring FIPS is a destructive reset: all master keys will be reset. This needs to be carefully planned.

Steps to solving the problem

You can use tools like Wireshark to display the traffic, and sometimes see why a TLS handshake fails.

Many of the problems I experienced were due to configuration problems on z/OS. I got a CTRACE trace on z/OS, see GSK trace and TCPIP and this usually allowed me to fix the problem.

Alert (40)

Alert Message: Level: Fatal (2), Description: Handshake Failure (40)

I used the gsksrvr ctrace to find that I did not have any TLS 1.3 certificates in my configuration.

Alert (51)

With TLS 1.3, a certificate like

SUBJECTSDN(CN('10.1.1.2') - 
O('NISTEC256') -
OU('SSS')) -
ALTNAME(IP(10.1.1.2))-
NISTECC -
KEYUSAGE( HANDSHAKE ) -
SIZE(256 ) -
SIGNWITH (CERTAUTH LABEL('DOCZOSCA')) -
WITHLABEL('NISTEC256')

failed. Changing it to SIZE(512) worked, even though size 256 is supported.

Using TLS 1.3, the handshake to TN3270 failed with no reason given.

I tracked down some problems due to FIPS being enabled.

FIPS standards establish requirements for ensuring computer security and interoperability, and are intended for cases in which suitable industry standards do not already exist.

I think of FIPS as taking the existing standards and making them a bit more secure: for example, not allowing some cipher suites, and not allowing certificates with small keys.

Enabling FIPS properly does not look easy. For example, the documentation says it requires that load modules are cryptographically signed, so authorised programs can check they have not been changed. Under the covers I believe that when IBM ships a module, it calculates the hash of the code, encrypts the hash, and stores the encrypted hash within the load module. At runtime the code uses IBM’s public key to decrypt this value, does the same hash calculation on the module, and compares the two.

Once this has been done, you can add statements to the ICSF configuration, such as FIPSMODE(YES,FAIL(YES)).

This says use FIPS, and if any checking fails – fail the request.

In z/OS 3.2 there is FIPS support for TLS 1.3; see the option FIPSMODE(140-3,INDICATE,FAIL(fail-option)).

Not all configurations are supported

The TLS 1.3 ChaCha20-Poly1305 cipher suites are not supported by FIPS. You need to use cipher suites configured with AES-GCM or AES-CCM.

I ran my test using FIPS

I could see the TLS 1.3 flow in Wireshark:

  • ClientHello request going to the server
  • ServerHello coming from the server
  • Change Cipher spec coming from the server
  • and nothing. No Alert message.

I found an entry in the z/OS 2.5 documentation.

The FIPS 140-2 standard does not define support for TLSv1.3 or the new cipher suites defined for it. Enabling both the TLSv1.3 protocol and FIPS support results in an error.

When my request failed I got CTRACE entries like

S0W1      MESSAGE   00000004  12:52:55.333904  SSL_ERROR                                  
Job TCPIP Process 0201001E Thread 00000001 crypto_chacha_encrypt_ctx
ICSF service failure: CSFPSKE retCode = 0x8, rsnCode = 0xbfe

S0W1 MESSAGE 00000004 12:52:55.334123 SSL_ERROR
Job TCPIP Process 0201001E Thread 00000001 crypto_chacha_encrypt_ctx
The algorithm or key size is not supported by ICSF FIPS

S0W1 MESSAGE 00000004 12:52:55.334355 SSL_ERROR
Job TCPIP Process 0201001E Thread 00000001 gsk_encrypt_tls13_record
ChaCha20 Encryption failed: Error 0x0335308f

Where the return code 0xbfe is

The PKCS #11 algorithm, mode, or keysize is not approved for ICSF FIPS 140-2. This reason code can be returned for PKCS #11 clear key requests when ICSF is in a FIPS 140-2 mode or 140-3,HYBRID mode. To see how 8/BFE(3070) can be returned when the ICSF FIPSMODE is 140-3,HYBRID, see ‘Requiring FIPS 140-2 algorithm checking from select z/OS PKCS #11 applications’ in z/OS Cryptographic Services ICSF Writing PKCS #11 Applications.

Maybe the FIPS code is badly implemented, in that it does not produce an alert message such as “FIPS processing problem”. But some security products do not display error information, because displaying it makes it easier to break in!

Vim crib sheet for an ISPF user

If you have any suggestions for this post – please let me know.

If you use z/OS Unix from ISPF, you can use the ISPF editor. If you use z/OS Unix through an SSH session, you do not have access to ISPF, and need to use native Unix facilities.

From an SSH session, to display files you can use the cat, head, tail or less commands. To change text the most common editor is Vim.

Vim is an enhanced vi; neovim is an enhanced vim – for example good for people who want to write vim plugins.

If you are starting out, use the vim editor. Initially use it to browse files instead of using tools like less, head or tail. Then graduate to making changes.

There are some good tutorials around for how to use vim.

I’ve tried to give the bits which are not covered elsewhere, to get you started. This is a good crib sheet of all of the commands.

Getting started

There are different modes (normal, insert, replace, visual, select). Sometimes you can enter text into the main window, sometimes you cannot. The current mode is shown on the bottom line, at the left.

  • — INSERT you can insert text in the main window. Press the Insert key to get into this mode. Press the Escape key to return to normal mode.
  • blank – this is normal/command mode. If you position the cursor on a word in the file and type dw it will delete the word. As you type commands, the text will be displayed on the bottom line.
  • — VISUAL you can use the mouse (or keys) to select text
    • v allows you to select text
    • V allows you to select lines
    • Ctrl-V allows you to select rectangles of data. Use Ctrl-V then move the cursor

If the screen is mostly read only, as you type commands they appear in the middle of the bottom line

  • Edit a file using vim file_name
  • Quit
    • :q (you may have to press escape then type :q)
    • if you’ve changed the file and want to quit without saving :q!
  • Save changes :w
  • Save changes and quit :wq
  • Page down and page up work like PF7 and PF8 (on my 3270 emulator I have <keyPress>Next mapping to PF(“8”))
  • The up and down cursor keys move the cursor, and data.
  • You can use the :help command to display comprehensive information.
  • Use Escape :set wrap or Escape :set nowrap to wrap the text or not
  • Use Escape :set nu or Escape :set nonu to display line numbers
  • Move the data (in normal mode – press Escape one or more times)
    • one char to the left under the window zl
    • one char to the right under the window zh
    • 20 char to the left under the window z20l
    • 20 char to the right under the window z20h
    • half a window to the left under the window zL
    • half a window to the right under the window zH
  • Top of the file gg
  • Bottom of the file G
  • Search
    • Find next /value (enter)
    • Use * to find the next word under the cursor
    • Use # to find previous word under the cursor
    • Find backwards ?value (enter)
    • Find a whole word /\<theword\>
    • / followed by up or down arrow shows history of search
    • Highlight all values found :set hlsearch
    • To ignore case for one search, append \c to the search pattern: /ISPF\c
    • To force case sensitivity, append uppercase \C to the pattern: /ISPF\C
    • To set ignore case for the session :set ic
    • To reset case :set noic
  • Find and replace
    • Change one instance on the current line :s/{pattern}/{replacement}/
    • Change all instances on the current line :s/{pattern}/{replacement}/g
    • Change all instances in the file :%s/{pattern}/{replacement}/g
    • Change instances in a range of lines :3,6s/{pattern}/{replacement}/g
    • Change instances from the current line to line 6 :.,6s/{pattern}/{replacement}/g
    • Change instances from the current line to the end :.,$s/{pattern}/{replacement}/g
    • Change instances from the current line through the next 3 lines :.,+3s/{pattern}/{replacement}/g
    • Change instances from the current line back to the top :.,1s/{pattern}/{replacement}/g
    • Show history :s
  • What were the last commands I entered? : followed by Ctrl-P (same as ISPF retrieve key)
  • Undo: in command mode (escape)
    • u undo last change
    • 3u undo last 3 changes
    • U undo change to the line
    • Ctrl-R redo
  • Label lines. In ISPF you can label lines, such as .a in the prefix command.
    • Mark the current line as the single letter a ma
    • Display all of the marked lines :marks
    • Jump to mark a `a

Profile

You can store profile commands in $HOME/.vimrc
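For example, a minimal $HOME/.vimrc setting some of the options covered above (the settings shown are just illustrative):

```vim
" $HOME/.vimrc
set number      " show line numbers (:set nu)
set hlsearch    " highlight all search matches
set ignorecase  " ignore case when searching (:set ic)
set nowrap      " do not wrap long lines
```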

Mapping keys

You can map keys to commands. See here

How do I extract the WLM definitions to print, and to compare?

WLM on z/OS has a good ISPF interface, and an old-technology way of printing the configuration to an ISPF listing.

I wanted to compare the WLM configuration from one system with another system. It took a while to get this working.

I’ve put some code on github to process an XML file. It is basic but it works. If people find this useful, I could expand and improve it.

Output the WLM report as XML

Go to the main WLM ISPF panel.

   File  Utilities  Notes  Options  Help
---------------------------------------
Functionality LEVEL011 Definiti
Command ===> __________________________

Definition data set . . : none

Definition name . . . . . ETPwlm (R
Description . . . . . . . ETP WLM Poli

Select one of the following options.
__ 1. Policies
2. Workloads
  • Select the FILE pull down.
  • Select option 4 Save as
  • It may display a screen of Errors / warnings. Press PF3
  • It displays a pop-up “Save to…”. Specify a data set name and select Save format 1 (for XML)

The file is a sequential FB 80 file.

Download this to your work station (or run the Python on z/OS).

Create JSON format data from the file

I used the Python code

import xmltodict
import json

file = "wlm.xml"
with open(file, "r") as myfile:
    data = myfile.read()
data = data.replace('\n', "")
book_dict = xmltodict.parse(data)
json_data = json.dumps(book_dict, indent=1, sort_keys=True)
# print(json_data)
with open("data.json", "w") as json_file:
    json_file.write(json_data)

This reads the file “wlm.xml” and creates a file “data.json”

This created

{
 "ServiceDefinition": {
  "@xmlns": "http://www.ibm.com/xmlns/prod/zwlm/2000/09/ServiceDefinition.xsd",
  "ApplicationEnvironments": {
   "ApplicationEnvironment": [
    {
     "Description": "WebSphere Application Server",
     "Limit": "NoLimit",
     "Name": "BBOC001",
     "ProcedureName": "BBO5ASR",
     "StartParameter": "JOBNAME=&IWMSSNM.S,ENV=ADCD.ADCD.&IWMSSNM",
     "SubsystemType": "CB"
    },
 ...
  "ClassificationGroups": {
   "ClassificationGroup": [
    {
     "CreationDate": "1999/11/15 19:19:26",
     "CreationUser": "TODD",
     "Description": "Production Batch High - Med",
     "ModificationDate": "1999/11/16 11:17:21",
     "ModificationUser": "TODD",
     "Name": "BATHIM",
     "QualifierNames": {
      "QualifierName": [
       {
        "Description": "All SMP/E jobs",
        "Name": "SMP*"
       },
 ...
  "Workloads": {
   "Workload": [
    {
     "CreationDate": "1999/11/15 16:55:08",
     "CreationUser": "TODD",
     "Description": "All batch workloads",
     "ModificationDate": "1999/11/15 16:55:08",
     "ModificationUser": "TODD",
     "Name": "BATCH",
     "ServiceClasses": {
      "ServiceClass": [
       {
        "CPUCritical": "No",
        "CreationDate": "1999/11/15 17:17:54",
        "CreationUser": "TODD",
        "Description": "High Batch - med",
        "Goal": {
         "Velocity": {
          "Importance": "2",
          "Level": "60"
         }
        },
 ...

You can now use your favourite tools for extracting data and formatting.
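For example, with a little Python you can walk the dictionary and pull out the velocity goals. This is a minimal sketch using an inline sample in the same shape as the output above (names and values are illustrative; a real run would load data.json instead):

```python
import json

# A fragment in the same shape as the converted service definition.
# The workload and service class names here are illustrative.
sample = """
{
 "ServiceDefinition": {
  "Workloads": {
   "Workload": [
    {
     "Name": "BATCH",
     "ServiceClasses": {
      "ServiceClass": [
       {
        "Name": "BATHIM",
        "Goal": {"Velocity": {"Importance": "2", "Level": "60"}}
       }
      ]
     }
    }
   ]
  }
 }
}
"""
sd = json.loads(sample)

# Walk Workloads -> ServiceClasses and report any velocity goals
for wl in sd["ServiceDefinition"]["Workloads"]["Workload"]:
    for sc in wl["ServiceClasses"]["ServiceClass"]:
        vel = sc.get("Goal", {}).get("Velocity")
        if vel:
            print(wl["Name"], sc["Name"],
                  "velocity", vel["Level"], "importance", vel["Importance"])
```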

Using Pandas

The Python code below produced simple reports.

import pandas as pd
import xmltodict

# Parse the WLM XML export into a dictionary
file = "wlm.xml"
with open(file, "r") as myfile:
    data = myfile.read()
data = data.replace('\n', "")
book_dict = xmltodict.parse(data)

# Extract the list of workloads and load them into a DataFrame
rg = book_dict["ServiceDefinition"]["Workloads"]["Workload"]

dd = pd.DataFrame.from_records(rg)
# pd.set_option('display.max_rows', 500)
# pd.set_option('display.max_columns', 500)
# pd.set_option('display.width', 1000)
print(dd)

Gave me

       Name  ...                                     ServiceClasses
0     BATCH  ...  {'ServiceClass': [{'Name': 'BATHIM', 'Descript...
1    DB2RES  ...  {'ServiceClass': {'Name': 'STPCDDF', 'Descript...
2  IZUGWORK  ...  {'ServiceClass': [{'Name': 'IZUGHTTP', 'Descri...
3   SERVERS  ...  {'ServiceClass': [{'Name': 'SRVHIM', 'Descript...
4   STARTED  ...  {'ServiceClass': [{'Name': 'NOTRUN', 'Descript...
5  TSOOTHER  ...  {'ServiceClass': [{'Name': 'OTHER01', 'Descrip...

This shows that with a little Python code you can produce useful reports. People with experience of Pandas can format it better.
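Note a quirk visible in the DB2RES row above: xmltodict returns a dict when there is a single ServiceClass, and a list when there are several. A minimal sketch (with illustrative sample records, not a real policy) that normalises this and produces one row per service class:

```python
import pandas as pd

# Sample records in the shape xmltodict produces from the WLM export.
# The workload and service class names are illustrative.
workloads = [
    {"Name": "BATCH",
     "ServiceClasses": {"ServiceClass": [
         {"Name": "BATHIM", "Description": "High Batch - med"},
         {"Name": "BATLOW", "Description": "Low Batch"}]}},
    {"Name": "DB2RES",
     "ServiceClasses": {"ServiceClass":
         {"Name": "STPCDDF", "Description": "DDF work"}}},
]

rows = []
for wl in workloads:
    sc = wl["ServiceClasses"]["ServiceClass"]
    if isinstance(sc, dict):  # xmltodict gives a dict for a single child
        sc = [sc]
    for s in sc:
        rows.append({"Workload": wl["Name"], **s})

df = pd.DataFrame(rows)
print(df)
```

This gives one row per service class, with its owning workload as a column, which is easier to report on than the nested dictionaries.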

See the code on GitHub to produce basic reports.

How do I change the PFKeys on the console?

I wanted to set a PF key to display (but not execute) a command, because I could not remember the syntax of the command.

You set up the PFKeys in a member such as USER.PARMLIB(PFKTAB00)

PFKTAB TABLE(COMMANDS) 
PFK(01) CMD('K E,1')
PFK(02) CMD('K E')
PFK(03) CMD('K E,D')
PFK(04) CMD('K D,F')
PFK(05) CMD('K S,DEL=R')
PFK(06) CMD('K S,DEL=RD')
PFK(07) CMD('D A,L')
PFK(08) CMD('D R,R,CN=(ALL)')
PFK(09) CMD('K D,U')
PFK(10) CMD('V TCPIP,TCPIP,OBEYFILE,USER.TCPPARMS(ROUTE)') CON(Y)
PFK(11) CMD('K E')
PFK(12) CMD("%NETV SHUTSYS") CON(Y)
...
PFK(24) KEY(12)
PFKTAB TABLE(COLIN)
PFK(01) CMD('K E,2')
...

Where PFK(12) is the command I wanted to specify; CON(Y) means confirm the command before executing it.

PFK(24) KEY(12) 

says: make PFK24 the same as PFK12.

Multiple tables

You can have multiple definitions (tables) within a member (here, COMMANDS and COLIN).

The console command

K N,PFK=COLIN

says: from the currently selected member, use the table COLIN.

Use a different member

You can create different members (in addition to having multiple tables within a member)

Issue the command

T PFK=CP

to set the table to use member PFKTABCP

The command

D PFK,T

displays the tables within the member, for example

PFK TABLES IN PFKTAB00 AVAILABLE FOR USE ON SYSTEM VS01               
TABLE     TABLE     TABLE     TABLE     TABLE     TABLE
COLIN     COMMANDS

You can display the PFKeys for a table using

d pfk,t=COLIN

You can activate a table within a PFKTAB using

  K N,PFK=COLIN

How to use PFKEY tables

You could set up multiple tables in one member, or have multiple members each with one table: for example, a table for common TCPIP commands and a table for general commands (including the NetView shutdown command).

You can specify CON(Y) to allow you to change the statement before executing it. For example, in my TCPIP table I have

PFK(10) CMD('V TCPIP,TCPIP,OBEYFILE,USER.TCPPARMS(ROUTE)') CON(Y)

to allow me to change which TCPIP configuration statements to use.
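As a rough illustration of the member layout, the sketch below parses a sample PFKTAB into a dictionary of tables. It assumes the simplified one-statement-per-line layout shown above (a real member can continue statements across lines), and the sample contents are abridged from the example earlier:

```python
import re

# Sample PFKTAB member contents, using the layout shown above
# (abridged; a real member can split statements across lines).
member = """
PFKTAB TABLE(COMMANDS)
PFK(01) CMD('K E,1')
PFK(07) CMD('D A,L')
PFK(10) CMD('V TCPIP,TCPIP,OBEYFILE,USER.TCPPARMS(ROUTE)') CON(Y)
PFK(12) CMD('K E')
PFK(24) KEY(12)
PFKTAB TABLE(COLIN)
PFK(01) CMD('K E,2')
"""

tables = {}      # table name -> {PF key number: command}
current = None
for line in member.splitlines():
    m = re.match(r"PFKTAB TABLE\((\w+)\)", line)
    if m:                       # start of a new table
        current = tables.setdefault(m.group(1), {})
        continue
    m = re.match(r"PFK\((\d+)\) CMD\('([^']*)'\)", line)
    if m and current is not None:
        current[int(m.group(1))] = m.group(2)
        continue
    m = re.match(r"PFK\((\d+)\) KEY\((\d+)\)", line)
    if m and current is not None:
        # KEY(nn) makes this key the same as key nn
        current[int(m.group(1))] = current.get(int(m.group(2)))

for name, keys in tables.items():
    print(name, keys)
```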

Why is the wrong TCPIP Resolver proc being used?

What is the resolver?

The resolver task provides local mapping of host names to IP addresses, which means you can provide your own mappings. You can choose to have the resolver go to a Domain Name Server and look the name up; but I just wanted to control which host names can be used.

For example

GLOBALTCPIPDATA – /etc/resolv.conf has

nameserver 8.8.8.8 
nameserver 1.1.1.1

GLOBALIPNODES – /etc/hosts has

151.101.128.223  pypi.org  pip
151.101.192.223  pypi.org  pip
151.101.192.223  files.pythonhosted.org  pipfiles
20.26.156.215    github.com
151.101.1.91     curl.se
185.199.110.133  raw.githubusercontent.com
185.199.110.133  release-assets.githubusercontent.com
169.63.188.167   downloads.pyaitoolkit.ibm.net

10.1.1.2         STD1.ibm.com
127.0.0.1        localhost

The above are needed for zopen to work.
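A quick way to check what a given name resolves to is Python's socket module. On z/OS the lookup goes through the resolver, so it reflects the GLOBALIPNODES mappings above; localhost is used here only because it resolves on any system.

```python
import socket

# Look up a host name through the system resolver. On z/OS this
# honours the resolver configuration (GLOBALIPNODES etc.), so it
# shows which mapping is actually in effect.
addr = socket.gethostbyname("localhost")
print("localhost ->", addr)
```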

The started task

There is a started task RESOLVER in both SYS1.PROCLIB and USER.PROCLIB. Although USER.PROCLIB normally takes precedence over SYS1.PROCLIB, it was the SYS1.PROCLIB version that was started.

It took me an hour or so to work out why.

SYS1.PARMLIB(BPXPRM00)

This member defines the OMVS configuration, such as which file systems to define and which file systems to mount.

I have a parameter RESOLVER_PROC(RESOLVER). The documentation says

  • Specifies how the resolver address space is processed during z/OS UNIX initialization.
  • procname The name of the address space for the resolver and the procedure member name in the appropriate proclib. procname is one to eight characters long. The procedure must reside in a data set that is specified by the MSTJCLxx parmlib member’s IEFPDSI DD card specification.

My MSTJCL00 parmlib member has

//MSTJCL00 JOB  MSGLEVEL=(1,1),TIME=1440                     
// EXEC PGM=IEEMB860,DPRTY=(15,15)
//STCINRDR DD SYSOUT=(A,INTRDR)
//TSOINRDR DD SYSOUT=(A,INTRDR)
//IEFPDSI DD DSN=SYS1.PROCLIB,DISP=SHR
//IEFPARM DD DSN=SYS1.PARMLIB,DISP=SHR
//SYSUADS DD DSN=SYS1.VS01.UADS,
// DISP=SHR
//SYSLBC DD DSN=SYS1.VS01.BRODCAST,
// DISP=SHR

According to the documentation above, RESOLVER_PROC(RESOLVER) will look for member RESOLVER in SYS1.PROCLIB.

Removing the RESOLVER_PROC from my BPXPRM00 did not solve the problem, because there is a default value.

DEFAULT: Causes an address space named RESOLVER to start, using the system default procedure of IEESYSAS. The address space is started with SUB=MSTR so that it runs under the MASTER address space instead of the JES address space.

There is an option RESOLVER_PROC(NONE), but TCPIP startup waits for the resolver, so your IPL waits until you start the resolver.

The easy fix

Stop and restart the resolver

P RESOLVER
S RESOLVER

A better fix is to update the member in SYS1.PROCLIB; however, because IBM can refresh SYS1.PROCLIB on my configuration, my changes could be overwritten.

Improving the resolver procedure

When I was looking into the problem I saw that the configuration files used were in /etc/.

When IBM refreshes the z/OS system it will replace the /etc directories, so it is better not to store my configuration in /etc/. I changed the procedure so that it only uses my personal data sets.

My resolver JCL is

//* TCPIP RESOLVER - COLINS 
//*
//RESOLVER PROC PARMS=CTRACE(CTIRES00)
//*
//EZBREINI EXEC PGM=EZBREINI,REGION=0M,TIME=1440,
// PARM=('&PARMS',
// 'ENVAR("RESOLVER_TRACE=/var/log/resolver")')
//SETUP DD DISP=SHR,DSN=COLIN.TCPPARMS(GBLRESOL),FREE=CLOSE
//SYSTCPT DD SYSOUT=*
//SYSPRINT DD SYSOUT=*
//SYSOUT DD SYSOUT=*
//*

The configuration is in COLIN.TCPPARMS(GBLRESOL).

This member now looks like

DEFAULTTCPIPDATA('COLIN.TCPPARMS(GBLTDATA)')
GLOBALTCPIPDATA('COLIN.TCPPARMS(RESOLVE)')
# GLOBALTCPIPDATA(/etc/resolv.conf)
;
# -----------------------------------------------------------------
# Default zPDT Linux Base to z/OS Tunnel (Stand-Alone)
# -----------------------------------------------------------------
;
# GLOBALIPNODES(/etc/hosts)
GLOBALIPNODES('COLIN.TCPPARMS(HOSTS)')
....

Where the members COLIN.TCPPARMS(RESOLVE) and COLIN.TCPPARMS(HOSTS) contain the information.

When you start the resolver task you get information like

EZZ9298I RESOLVERSETUP - COLIN.TCPPARMS(GBLRESOL)                 
EZZ9298I DEFAULTTCPIPDATA - COLIN.TCPPARMS(GBLTDATA)
EZZ9298I GLOBALTCPIPDATA - COLIN.TCPPARMS(RESOLVE)
EZZ9298I DEFAULTIPNODES - COLIN.TCPPARMS(ZPDTIPN1)
EZZ9298I GLOBALIPNODES - COLIN.TCPPARMS(HOSTS)
EZZ9304I COMMONSEARCH
EZZ9304I CACHE
EZZ9298I CACHESIZE - 200M
EZZ9298I MAXTTL - 2147483647
EZZ9298I MAXNEGTTL - 2147483647
EZZ9304I NOCACHEREORDER
EZZ9298I UNRESPONSIVETHRESHOLD - 25
EZZ9291I RESOLVER INITIALIZATION COMPLETE

so you can see what configuration is being used.

Help, I cannot logon to my z/OS system.

I have several versions of z/OS on my zD&T system, and I needed to go back to an earlier version. Unfortunately, I could not remember the passwords for my userids. I know the passwords were a combination of upper case and lower case letters, with a few random numbers and punctuation characters; and, being good, I didn’t write them down.

Fortunately, I have a standalone z/OS system which I can use to access the old system and reset the passwords.

These are the steps I took. I hope they work on other people’s systems; they may not, because of different configurations. You should have backups of both RACF databases before you start.

You should do no other security work while doing this change, because you might change the wrong RACF database.

Overview

On a z/OS system you can have two RACF databases configured, for example for recovery reasons.

I configured the old RACF database as the backup database on my emergency system, switched to use it, reset the password, and then switched back again.
I could then IPL the old system and use the password I had specified. This may not work for all environments, but it worked for me.

The steps

The old RACF database needs to be cataloged and made available in the emergency system.

  • Mount the volume containing the old RACF database on the emergency system.
  • Use ISPF 3.4, specify the data set name and volume, and use the C prefix command to catalog it. My old database name is SYS1.COLIN.Z24C.RACF.
  • Change your RACF configuration: see Plan B – REIPL with different data sets.
  • Re-IPL.

Log on to TSO with an authorised userid such as IBMUSER.

Issue the #RVARY command

This gave

ICH15013I RACF DATABASE STATUS:
ACTIVE  USE   NUM  VOLUME    DATASET
------  ---   ---  ------    -------
YES     PRIM   1   B3CFG1    SYS1.COLIN.RACFDB.Z31B
NO      BACK   1   *DEALLOC  SYS1.COLIN.Z24C.RACF

Issue

#RVARY ACTIVE,DATASET(SYS1.COLIN.Z24C.RACF)

It gave me

*01 ICH702A ENTER PASSWORD TO ACTIVATE RACF JOB=RACF     USER=START1    
R 1 SUPPRESSED
ICH15013I RACF DATABASE STATUS:
ACTIVE  USE   NUM  VOLUME    DATASET
------  ---   ---  ------    -------
YES     PRIM   1   B3CFG1    SYS1.COLIN.RACFDB.Z31B
YES     BACK   1   CCPVOL    SYS1.COLIN.Z24C.RACF

The password on my system was YES.

Issue

#RVARY SWITCH

It prompts for the password again.

It gave

ICH15013I RACF DATABASE STATUS:                          
ACTIVE  USE   NUM  VOLUME    DATASET
------  ---   ---  ------    -------
YES     PRIM   1   CCPVOL    SYS1.COLIN.Z24C.RACF
NO      BACK   1   *DEALLOC  SYS1.COLIN.RACFDB.Z31B

Issue

#RVARY INACTIVE,DATASET(SYS1.COLIN.RACFDB.Z31B)

This gave me

ICH15013I RACF DATABASE STATUS:
ACTIVE  USE   NUM  VOLUME    DATASET
------  ---   ---  ------    -------
YES     PRIM   1   CCPVOL    SYS1.COLIN.Z24C.RACF
NO      BACK   1   *DEALLOC  SYS1.COLIN.RACFDB.Z31B

From an authorised TSO userid I issued

ALU IBMUSER PASSWORD(PASSW9RD)

I then “undid” all of the steps I had done before:

  • #RVARY ACTIVE,DATASET(SYS1.COLIN.RACFDB.Z31B)
  • #RVARY SWITCH
  • #RVARY INACTIVE,DATASET(SYS1.COLIN.Z24C.RACF)
  • Re-IPL with your normal RACF data sets.