I’ve started LDAP now what do I do?

Contents

What is LDAP?

I found this a good introduction to LDAP: the structure of the data, searching, and filters.

I’ve written up

Setting up LDAP on z/OS

I created a new LDAP instance on z/OS see getting started with LDAP on z/OS, and the definitions and JCL I used to create LDAP. I used the standard schema /usr/lpp/ldap/etc/schema.user.ldif and the IBM extensions /usr/lpp/ldap/etc/schema.IBM.ldif which give you the attributes for working with RACF etc.

Now what do I do?

It is much easier to set up your LDAP structure properly before you start adding lots of records, rather than try to change the structure once you have populated it with all your data. You could be agile: develop your LDAP data, back up the data, and “just” recreate the LDAP repository once you know what you want. Where “just” means writing Python or Rexx scripts to take the LDAP data and convert it to the new format, for example adding additional information to every definition before adding it back to the directory.

Because I did not fully understand how Access Control Lists work, I managed to make some of my data invisible to end user requests; it is easy to make mistakes when you do not know what you are doing. This blog should give you some hints about setting up your LDAP environment, and help you avoid some of the rework.

Background to LDAP

LDAP is a generalised directory with an application interface over IP.

The data is held in a hierarchical (upside down tree) form. For example the top of the tree may be called o=myorg, where o stands for Organisation.

You configure this top of the tree in the LDAP config file for example

# this defines a file based database
database LDBM GLDBLD31/GLDBLD64
# this says it can use RACF for password checking
useNativeAuth all
#this is the top of the tree
suffix "o=myorg"
# this is the location on disk of the database
databaseDirectory /var/ldap/ldbm

The next levels down might be

  1. ou=users, o=myorg
  2. ou=groups,o=myorg
  3. ou=corporate data,o=myorg

The data for the first subtree could be stored in DB2, the data for the second subtree could be in files, and the data for the third subtree could be in another LDAP, somewhere else.

A “record” or leaf of the tree could be identified by

dn:cn=colin paice,c=GB,ou=users,o=myorg

Where

  • cn= is the common name
  • c= is the country name
  • ou= is the organisational unit
  • o= is the organisation name.

With each record you need one or more object classes. Object classes have attributes. For example the object class person can have an attribute telephoneNumber. If you want to use telephoneNumber you need an object class that supports it.

A typical entry might be

dn: cn=mq, o=Your Company
objectclass: top
objectclass: organizationalPerson
cn: mq
telephoneNumber: 1234567
telephoneNumber: 987654321
sn: mqadmin

Logging on with z/OS userid and password.

I set up LDAP so I could logon to the Linux queue manager, and use my z/OS userid and password. For this I had

dn: cn=ibmuser, o=Your Company
objectclass: top
objectclass: person
objectclass: organizationalPerson
objectclass: ibm-nativeAuthentication
cn: ibmuser
sn: snibmuser
ibm-nativeId: IBMUSER

Where

  • objectclass: ibm-nativeAuthentication is for the RACF authorisation
  • ibm-nativeId: IBMUSER says use the RACF userid IBMUSER when cn=ibmuser, o=Your Company is used.

I also set up an Access Control List entry for this userid, so it can read, search, compare and update entries

dn: o=Your Company
changetype: modify
replace: aclEntry
aclEntry: access-id:cn=ibmuser, o=Your Company:normal:grant:rscw

This says

  • for the subtree under the distinguished name o=Your Company. DNs such as ou=groups,o=myorg and ou=users,o=myorg would be more typical subtree names.
  • the DN cn=ibmuser, o=Your Company is the DN of the user this ACL applies to. This would normally be a group rather than a userid. You can have multiple aclEntry values, one per DN.
  • it has read, search, compare and write access to normal fields. A social security number is a “sensitive” field, and a password is a “critical” field. This ACL only gives access to normal fields.

What do you want in a year’s time?

It is much easier to set up your LDAP structure properly before you start, rather than try to change the structure once you are using it.

For example you could have a flat tree with entries like

  • dn:cn=colin paice,o=myorg for users
  • dn:cn=mqadmin,o=myorg for groups

This will quickly become hard to manage. You may find the following are better.

  • dn:cn=colin paice,ou=users,o=myorg for users
  • dn:cn=mqadmin,ou=groups,o=myorg for groups
  • dn:cn=testmqadmin,ou=groups,o=myorg for a different group with the department name in it.

or

  • dn:cn=colin paice,c=gb,ou=users,o=myorg for users by country
  • dn:cn=mqadmin,ou=test,ou=groups,o=myorg for test groups
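If you go with a structured tree, the intermediate ou entries have to exist before you can add users or groups under them. A minimal sketch in ldif, using the standard organizationalUnit object class and the example names above:

dn: ou=users,o=myorg
objectclass: top
objectclass: organizationalUnit
ou: users

dn: ou=groups,o=myorg
objectclass: top
objectclass: organizationalUnit
ou: groups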

Administration

You can give access to administer nodes in the tree at the subtree level. For example the subtree ou=test,ou=groups,o=myorg might have administrators

  • cn=hqadmin,ou=groups,o=myorg
  • cn=colin paice,c=gb,ou=users,o=myorg

Having a structure like dn:cn=mqadmin,ou=test,ou=groups,o=myorg means you can give the test manager admin control over the test groups, while the test manager has no authority over the production groups. If you had a structure like dn:cn=mqadmin,ou=groups,o=myorg it would be much harder to separate the responsibilities.
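One way of doing this (I believe) is with the entryOwner attribute, which names the administrators of an entry and, with ownerPropagate, of the subtree below it. A sketch only, using the DNs above; check the z/OS LDAP ACL documentation for the exact syntax before relying on it:

dn: ou=test,ou=groups,o=myorg
changetype: modify
replace: entryOwner
entryOwner: group:cn=hqadmin,ou=groups,o=myorg
entryOwner: access-id:cn=colin paice,c=gb,ou=users,o=myorg
-
replace: ownerPropagate
ownerPropagate: TRUE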

For the top of the tree, o=myorg, you could set it up so that only the following group has administration authority.

  • cn=hqadmin,ou=groups,o=myorg

From a performance perspective it will be cheaper and faster to access data in a subtree, rather than search the whole tree – bearing in mind you could have millions of entries in the tree.

Note: The adminDN userid in the LDAP config file has authority over everything. The ACLs on the tree, subtree, or record define who else has administration authority.

Controlling access

You control access using Access Control Lists. An ACL looks like

dn: ou=users,o=myorg
changetype: modify
replace: aclEntry
aclEntry: access-id:cn=ibmuser, o=Your Company:
  object:ad:normal:grant:rscw:sensitive:rscw:
  critical:rscw:restricted:rscw
aclEntry: group:cn=authenticated:normal:rsc

This defines the access for the subtree ou=users,o=myorg

  • cn=ibmuser, o=Your Company can add and delete entries under the subtree, and use any of the fields, including sensitive (e.g. social security number) and critical (e.g. password) fields.
  • group:cn=authenticated means any user who has authenticated can read, search and compare normal fields. They cannot see or select on sensitive or critical fields.
  • object:a|d gives authority to add or delete objects in the subtree.
  • normal:… sensitive:… critical:… give access to the fields in the data (so you can issue an ldap_search, for example). You can specify grant: or deny: before the permissions, and so give or remove access.
  • restricted: covers the ACL attributes themselves. To update ACLs you must have read and write access to the restricted class.
  • You can specify what access people have to individual fields, so you can give them access to most fields, but deny access to specific fields.
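To see which ACL is actually in effect on an entry or subtree, you can ask for the ACL attributes explicitly. A sketch, reusing the credentials and DNs from the examples above:

ldapsearch -h 127.0.0.1 -D "cn=ibmuser, o=Your Company" -w ? -b "ou=users,o=myorg" -s base "objectclass=*" aclEntry aclPropagate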

The userid defined in the LDAP configuration file under adminDN can change anything. When I messed up my data, I changed the config file to use adminDN cn=admin and password secret1, stopped and restarted the LDAP server, and fixed my problem. I undid the changes to the config file, stopped and restarted the server, to get back to normal operation.

You need to plan on what access you want, at which levels of the tree, and who has what access to which fields.

It is better to give access to groups, and add users to groups, than to give access to individual userids. For example, if you have many ACLs and someone joins the team, adding the id to a group is one change. If you have to add the id to each ACL, you may have many changes. A group-based ACL is sketched below.
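For example, a group-based aclEntry might look like the following sketch (the group DN is illustrative, matching the static group used later in this post):

dn: ou=users,o=myorg
changetype: modify
add: aclEntry
aclEntry: group:cn=ldap_team_static,o=myorg:normal:rsc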

What is going to use it?

Programs using LDAP may have requirements on the data structure. For example MQ can use userid and group information in LDAP. It expects the data to be under a single subtree. You could specify that groups are to be found under ou=test,ou=groups,o=myorg. You cannot say under both ou=test,ou=groups,o=myorg and ou=production,ou=groups,o=myorg. Similarly, users would be under ou=users,o=myorg. If you specified the subtree c=gb,ou=users,o=myorg, this limits users to having a GB userid, which may not be what you want. A sketch of the MQ side is shown below.
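For MQ, the users and groups subtrees are given on the IDPWLDAP AUTHINFO object. A sketch only; the connection name, DNs and attribute choices are illustrative, not a recommendation:

DEFINE AUTHINFO(MYLDAP) +
  AUTHTYPE(IDPWLDAP) +
  CONNAME('10.1.1.2(389)') +
  BASEDNU('ou=users,o=myorg') +
  BASEDNG('ou=groups,o=myorg') +
  SHORTUSR('sn') +
  CLASSGRP('groupOfNames') +
  FINDGRP('member') +
  GRPFIELD('cn')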

Using groups

Although LDAP on z/OS can support static groups (with a list of members), nested groups (groups containing groups), and dynamic groups (where you can say, for example, those users with ou=production), the nested groups and dynamic groups cannot be used to check group membership.

You can query and display all users in all group types.

You can ask for just the group names where colinPaice is a member, but only when the group is a static group; not for groups of groups, nor dynamic groups. This makes managing groups much harder, and you may need to have bigger groups rather than specific smaller groups.

You cannot exploit the flexibility of the nested and dynamics groups.

What fields do you want in the records?

If you have a definition like

dn: cn=ibmuser, o=Your Company
objectclass: top
objectclass: person
objectclass: organizationalPerson
objectclass: ibm-nativeAuthentication
cn: ibmuser
sn: snibmuser
ibm-nativeId: IBMUSER

If you want to use the ibm-nativeId field to give the RACF userid to use, then you need objectclass: ibm-nativeAuthentication.

If you have ibm-nativeAuthentication you must have ibm-nativeId, and may have other fields.

The “must” and “may” fields are defined in a schema.

The schema /usr/lpp/ldap/etc/schema.IBM.ldif has

objectclasses: (
  NAME 'ibm-nativeAuthentication'
  DESC 'Indicates native security manager should be used during authentication.'
  SUP top
  AUXILIARY
  MUST ( ibm-nativeId )
)

The objectClass: person has

objectclasses: (
  NAME 'person'
  DESC 'Defines entries that generically represent people.'
  MUST ( cn $ sn )
  MAY ( userPassword $ telephoneNumber $ seeAlso $ description )
)

This means you must provide a cn and an sn attribute, and you may provide the other attributes, such as telephoneNumber.

The schema can give information about an attribute

attributetypes: (
  NAME ( 'sn' 'surName' )
  DESC 'This is the X.500 surname attribute, which contains the family name of a person.'
  EQUALITY caseIgnoreMatch
  ORDERING caseIgnoreOrderingMatch
  SUBSTR caseIgnoreSubstringsMatch
)

This shows that for ordering or comparing, it ignores the case of the data, so “colin paice” is the same as “COLIN PAICE”.

You need to decide what fields you want, and how you want the fields to be processed, for example case sensitivity and comparison.

You can write your own schema for fields that are unique to your organisation.
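On z/OS the schema is itself an LDAP entry (cn=schema), so adding your own attribute and object class is an ldapmodify against it. A sketch only: the OIDs are placeholders that must come from your own OID arc, and the attribute and class names are invented for illustration.

dn: cn=schema
changetype: modify
add: attributetypes
attributetypes: ( 1.3.6.1.4.1.99999.1.1 NAME 'myorgRole' DESC 'Role within myorg' EQUALITY caseIgnoreMatch SYNTAX 1.3.6.1.4.1.1466.115.121.1.15 )
-
add: objectclasses
objectclasses: ( 1.3.6.1.4.1.99999.2.1 NAME 'myorgPerson' DESC 'myorg extensions to a person' SUP top AUXILIARY MAY ( myorgRole ) )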

What you use the field for is up to you. For example “sn” could be surname, or “short name”. You just have to be consistent and document it.

What tools are there to help me?

You can use the tools provided with LDAP to administer LDAP. For example the ldap_modify command can be used to process a batch of definitions; whole records, or attributes within records.

Eclipse has a plugin Apache Directory Studio. It works well, and looks like it is highly recommended. This plugin allows you to browse and manage entries. I could not get it to display the schema.

How do I backup/export the data

You can use the ds2ldif command. It creates a file in ldif format which can be used to add back all the records (using the ldap_add or ldap_modify commands).
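Reloading is then just a bulk add from that file. A sketch, with an illustrative file name and the cn=Admin credentials used elsewhere in these posts:

ldapadd -h 127.0.0.1 -p 389 -D "cn=Admin" -w secret -f backup.ldif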

Colin’s list of MQ messages

This blog post is my annotations to MQ messages, containing descriptions of what I did to get an MQ message, and what I did to fix the problem. It is meant for web search programs, rather than humans.

I will extend it as I experience problems.

AMQ7026E: A principal or group name was invalid.

I could display an auth entry

dspmqaut -m qml -t qmgr -g “cn=dynamic, o=Your Company”


Entity cn=dynamic, o=Your Company has the following authorizations for object qml:
connect

but not delete it

setmqaut -m qml -t qmgr -g “cn=dynamic,o=Your Company” -connect

AMQ7026E: A principal or group name was invalid.

I had set this up using LDAP, but MQ was not able to find the record in LDAP. This is because I had changed the configuration. There was an LDAP search

(&(objectClass=groupOfNames)(OU=cn=dynamic, o=Your Company))

It could not be found because there was no LDAP entry with objectClass=groupOfNames with an entry (OU=cn=dynamic, o=Your Company)

I had changed

DEFINE AUTHINFO(MYLDAP) +
AUTHTYPE(IDPWLDAP) +
GRPFIELD(sn)

to

GRPFIELD(ou)

So when MQ came to delete it, it looked for

(&(objectClass=groupOfNames)(OU=cn=dynamic, o=Your Company))

instead of

(&(objectClass=groupOfNames)(SN=cn=dynamic, o=Your Company))

and could not delete it.

I changed the LDAP group to add the OU, and the SETMQAUT command worked.
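The change to the group would be something like the following sketch, adding an OU value that matches the filter MQ was issuing (groupOfNames allows ou as an optional attribute):

dn: cn=dynamic, o=Your Company
changetype: modify
add: ou
ou: cn=dynamic, o=Your Company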

AMQ5532E: Error authorizing entity in LDAP

EXPLANATION:
The LDAP authorization service has failed in the ldap_first_entry call while
trying to find user or group ‘NULL’. Returned count is 0. Additional context is
‘cn=mqadmin,ou=groups,o=your Company’.

Colin’s comments

I had defined a dynamic group, by specifying the group name as part of the LDAP user entry.

When I changed the authinfo object to have NESTGRP(YES), I got the above message because there was no record for cn=mqadmin,ou=groups,o=your Company.

Define the record with the appropriate object class as defined by the MQ AUTHINFO attribute CLASSGRP. (CLASSGRP(‘groupOfNames’) in my case).

AMQ5530E: Error from LDAP authentication and authorization service

EXPLANATION:
The LDAP authentication and authorization service has failed. The
‘ldap_ssl_environment_init’ call returned error 113 : ‘SSL initialization call
failed’. The context string is ‘keyfile=”/var/mqm/qmgrs/qml/ssl/key.kdb”
SSL/TLS rc=408 (ERROR BAD KEYFILE PASSWORD)’. Additional code is 0.

Colin’s comments.

I had changed the keyring using alter qmgr CERTLABL(ECRSA1024) SSLKEYR(‘/home/colinpaice/mq/zzserver’)

AMQ5530E: Error from LDAP authentication and authorization service

EXPLANATION:
The LDAP authentication and authorization service has failed. The
‘ldap_simple_bind’ call returned error 49 : ‘Invalid credentials’. The context
string is ‘10.1.1.2:389 ‘. Additional code is 0.

Colin’s comments

An anonymous logon to LDAP was attempted (LDAPUSER and LDAPPWD omitted) and the LDAP server had allowAnonymousBinds off.

Specify userid and password.
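On the MQ side that means giving the AUTHINFO object a bind DN and password; a sketch only, the DN and password values are illustrative:

ALTER AUTHINFO(MYLDAP) +
  AUTHTYPE(IDPWLDAP) +
  LDAPUSER('cn=mqserver, o=Your Company') +
  LDAPPWD('passw0rd')
REFRESH SECURITY TYPE(CONNAUTH)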

API: 2460 (099C) (RC2460): MQRC_HMSG_ERROR during get

I had this during an MQGET when the MQGMO was not using Version 4.

This would also apply to an MQPMO not using Version 3.

Reason 2464: FAILED: MQRC_IMPO_ERROR

MQIMPO impo = {MQIMPO_DEFAULT};

I got this return code because I had a C program on z/OS and compiled it using the ASCII option. This meant the IMPO eye catcher was ASCII (0x494D504F) instead of EBCDIC (0xC9D4D7D6).

MQIMPO impo = {MQIMPO_DEFAULT};

// if we are in ascii mode we need to convert eyecatcher from ASCII to
// EBCDIC
__a2e_l(impo.StrucId,sizeof(impo.StrucId));

This also applies to

2482: FAILED: MQRC_PD_ERROR

2440 (0988) (RC2440): MQRC_SUB_NAME_ERROR

I got this because my data was in EBCDIC, but I had specified the code page as 437 ( ASCII).

AMQ9669E The PKCS #11 token could not be found

AMQ9669E The PKCS #11 token could not be found.
Severity: 30 (Error)
Explanation: The PKCS #11 driver failed to find the token specified to MQ in the PKCS #11 token label field of the GSK_PKCS11 SSLCryptoHardware parameter. The channel is <insert_3>; in some cases its name cannot be determined and so is shown as ‘????’. The channel did not start.
Response: Ensure that the PKCS #11 token exists with the label specified. Restart the channel.

I got this when I had the following in my mqclient.ini

SSL:

SSLCryptoHardware=GSK_PKCS11=/usr/lib/x86_64-linux-gnu/pkcs11/opensc-pkcs11.so\;UserPIN (mytoken)\;12345678\;SYMMETRIC_CIPHER_ON\;

I commented out the SSLCryptoHardware… (I used #SSLCryptoHardware…) and it worked.

Z/OS messages

CSQY010E %CSQ9 CSQYASCP LOAD MODULE CSQYARIB IS NOT AT THE CORRECT
RELEASE LEVEL

If you get CSQYARIB then this can be caused by running Beta code past its validity date.

CSQX630E %CSQ9 CSQXRESP Channel ???? requires SSL

The chinit needs some SSL tasks and a keyring

%csq9 ALTER QMGR SSLTASKS(5) SSLKEYR(MQRING)

Restart the CHINIT.

Return codes

3015 (0BC7) (RC3015): MQRCCF_CFST_PARM_ID_ERR

IBM Documentation: Explanation. Parameter identifier is not valid. The MQCFST Parameter field value was not valid.

Colin’s comment. I passed a value which was valid, but the context was not valid. For example I issued a PCF command to start SMDSCONN. On the console the command gave

CSQM174E M801 CSQMSSMD ‘SMDSCONN’ KEYWORD IS NOT ALLOWED WITH CFLEVEL(4) – THIS KEYWORD REQUIRES CFLEVEL(5)

3229 (0C9D) (RC3229) MQRCCF_PARM_VALUE_ERROR

I got this trying to use MQCMD_INQUIRE_Q.

I had used qtype=MQOT_LOCAL_Q = 1004 when I should have used MQQT_LOCAL = 1

The PCF return message told me the incorrect parameter and value

2019 (07E3) (RC2019): MQRC_HOBJ_ERROR

I got the MQRC_HOBJ_ERROR when using MQSUB to subscribe to a topic. This is described here.

Colin’s comment

I got this because the queue I was using was not consistent with the subscription definition. For example

  • The subscription was using a managed subscription and I was using a queue
  • The queue name I specified in the queue handle did not match the queue name in the subscription

You can use DISPLAY SUB(..) DEST DESTCLAS to check.

A managed subscription will have a DEST(SYSTEM.MANAGED.DURABLE…) and DESTCLAS(MANAGED).

When using a queue you will have DEST(COLINSUBQ) DISTYPE(RESOLVED)

CSQX690I Cipher specifications based on the SSLv3 protocol are disabled
CSQX694I Cipher specifications based on the TLS V1.0 protocol are disabled
CSQX668I Cipher specifications based on the TLS V1.2 protocol are disabled
CSQX670I Cipher specifications based on the TLS V1.3 protocol are disabled
CSQX693I Weak or broken SSL cipher specifications are enabled

I started a preconfigured CHINIT and got these messages. I immediately thought “that’s wrong”.
People should be running with TLS 1.2 and TLS 1.3 – with the aim of migrating off TLS 1.2 to TLS 1.3.

These were enabled because of DD statements in the CHINIT JCL

//CSQXWEAK DD DUMMY 
//CSQXSSL3 DD DUMMY
//TLS10ON DD DUMMY


See Deprecated CipherSpecs


LDAP error messages and codes

This blog post is a repository for the LDAP error codes I experienced, and the actions I took to resolve the problems.

LDAP return codes

Messages include return codes like “3”, but the LDAP programming book has terms like “LDAP_PARAM_ERROR”.

These are defined in

/usr/include/ldap*.h,

SSL initialization failure reason codes are documented at

https://www.ibm.com/docs/en/zos/2.5.0?topic=utilities-ssltls-information-ldap-client#idg18488__failrc

GLD1342E Unwilling to open file or directory ‘/var/ldap/schema’:

File or directory UID 0, UID of program 990023, GID of file or directory 1, GIDs of program (990018).

Colin’s comments

The LDAP started task expects to be the file owner of the /var/ldap/* files. On ADCD they were OMVSKERN:OMVSGRP. I used

chown -R gldsrv:gldgrp /var/ldap/*

to change the file owner.

Object class violation: additional info: R001026 No structural object class specified for ‘cn=ibmuser, o=Your Company’.

Colin’s comments

I had an ldif file with

dn: cn=mq, o=Your Company
changetype: add
objectclass: top
#objectclass: person
#objectclass: organizationalPerson
objectclass: ibm-nativeAuthentication
cn: mq
telephoneNumber: 1234567
telephoneNumber: 12345672
sn: Administrator

And no structural object class (ibm-nativeAuthentication is an auxiliary class). When I uncommented person or organizationalPerson it worked.

R001030 Entry contains attribute ‘ibm-nativeid’ which is not allowed for object class

I was trying to add ‘ibm-nativeid’ to an entry. This attribute belongs to the object class ibm-nativeAuthentication. The entry has to have this object class; for example, add the objectclass: ibm-nativeAuthentication line shown below.

dn: cn=colin, o=Your Company
objectclass: top
objectclass: person
objectclass: organizationalPerson
objectclass: ibm-nativeAuthentication
cn: LDAP Administrator
sn: Administrator
ibm-nativeId: COLIN

Credentials are not valid: R004062 Credentials are not valid

By accident I overwrote my administration userid definition.

Colin’s comments

Edit GLD.CNFOUT(DSCONFIG) (or whatever config file you are using).

Comment out the adminDN, and add in the cn=Admin and its password.

From

adminDN "cn=ibmuser, o=Your Company"
# adminDN "cn=Admin"
# adminPW secret

to

# adminDN "cn=ibmuser, o=Your Company"
adminDN "cn=Admin"
adminPW secret

  • Stop and restart LDAP.
  • Fix the userid
  • Change the admin definitions back
  • restart LDAP.

R003070 Access denied because user does not have ‘write’ permission for all modified attributes

I used a command like ldapmodify -a -h 127.0.0.1 -p 389 -D “cn=adcda, o=Your Company” -w adcdapw1 -f mqacl.* to change some definitions, but the userid cn=adcda,o=Your Company did not have the correct permissions.

You can enable the LDAP acl trace using f gldsrv,debug 128, and reset it using f gldsrv,debug 0

To change/add/delete an ACL the id needs restricted:rscw

For example

dn: o=Your Company
changetype: modify
replace: aclEntry
aclEntry: access-id:cn=ibmuser, o=Your Company:
object:ad:normal:grant:rscw:sensitive:rscw:critical:rscw
aclEntry: access-id:cn=adcda, o=Your Company:
object:ad:normal:rscw:sensitive:rscw:critical:rscw:restricted:rscw

Insufficient access: R003057 Access denied because user does not have ‘add’ permission for the parent entry

Trying to add an entry.

The userid is not authorised to add an entry. It needs an ACL with object:ad (a for add, d for delete).

dn: o=Your Company
changetype: modify
replace: aclEntry
aclEntry: access-id:cn=ibmuser, o=Your Company:
object:ad:normal:rscw:sensitive:rscw:critical:rscw
aclEntry: access-id:cn=adcda, o=Your Company:
object:ad:normal:rscw

GLD1116E Unable to initialize an SSL connection with …: 515 – Key share list is not valid.

My GSK_CLIENT_TLS_KEY_SHARES GSK_SERVER_TLS_KEY_SHARES environment variables had an invalid value. They had 0021 which is not supported in TLS 1.3.

Look in the gsktrace

GLD1063E Unable to initialize the SSL environment: 416 – Permission denied.

GLD1160E Unable to initialize the LDAP client SSL support: Error 113, Reason -17.

ICH408I USER(GLDSRV ) GROUP(GLDGRP ) IRR.DIGTCERT.LISTRING CL(FACILITY) INSUFFICIENT ACCESS AUTHORITY ACCESS INTENT(READ ) ACCESS ALLOWED(NONE )

In the LDAP config file I had sslKeyRingFile START1.MQRING. The userid GLDSRV did not have read access to the keyring list facility IRR.DIGTCERT.LISTRING CL(FACILITY)

permit IRR.DIGTCERT.LISTRING CL(FACILITY) id(GLDSRV)
SETROPTS RACLIST(FACILITY) REFRESH

GLD1160E Unable to initialize the LDAP client SSL support: Error 113, Reason 705

I had GSK_OCSP_CLIENT_CACHE_SIZE=10000, when I set it to 100, it worked.

GLD1160E Unable to initialize the LDAP client SSL support: Error 113, Reason 2.
GLD1063E Unable to initialize the SSL environment: 202 – Error detected while opening the certificate database.

  • reason code 2: Keyring open error
  • SSL return code 202: Keyring open error

Actions

  • Check value specified
  • Check access
    • rdefine rdatalib START1.MQRING.LST UACC(NONE)
    • SETROPTS RACLIST(RDATALIB) REFRESH
    • permit START1.MQRING.LST class(RDATALIB) ACCESS(READ) id(GLDSRV)

Check the keyring exists ( list the contents of it)

RACDCERT LISTRING(name) ID(COLIN)

Get out a gsk trace .

  • Add GSK_TRACE=0xff to the env file.
  • By default the output goes to gskssl.*.trc
  • Format it using gsktrace gskssl.*.trc gsktrace.out
  • oedit gsktrace.out and search for ERROR. I had ERROR gsk_open_keyring(): IRRSDL00 GetData failed: SAF 8, RC 8, Reason 84.
  • These are documented here. SAF 8, RC 8, Reason 84 means keyring not found.

GLD1160E Unable to initialize the LDAP client SSL support: Error 113, Reason -99.

Reason -99 is GSK_ERROR_UNKNOWN_ERROR!

I got this when trying to use OCSP with LDAP. I had

GSK_OCSP_RESPONSE_SIGALG_PAIRS=0601050305010804

If I removed the 0804 (or 0806 or 0805) then startup got past this message.

GLD1116E Unable to initialize an SSL connection 8 – Certificate validation error.

Colin’s comments

I got this many times for many different reasons.

I had Certificate Revocation List processing enabled.

In the GSK trace I had

ERROR check_crl_issuer_extensions(): crlSign bit is not set in KeyUsage
ERROR check_revoked(): Unable to verify CRL issuer extensions: Error 0x03353026

03353026 Incorrect key usage.

Explanation: The key usage certificate extension does not permit the requested key operation.

My CA certificate was not defined properly; I needed

keyUsage = keyCertSign, digitalSignature,cRLSign

GLD1116E Unable to initialize an SSL connection openssl SSL alert number 42

Colin’s comments

The certificate sent from the client was missing the authorityKeyIdentifier extension, because the CA certificate is missing.

In the openssl -config xxx.cnf file, in the specified -extensions section (or the default extensions), you need

subjectKeyIdentifier = hash
authorityKeyIdentifier = keyid:always, issuer:always

You need to change the CA, regenerate the end user certificate, and redistribute the CA.

533 – Remote partner indicates unsupported certificate.

GLD1116E Unable to initialize an SSL connection 402 – No SSL cipher

Colin’s comments 1

I could see from the gsktrace on zOS there was a message ERROR read_client_hello_cipher_select(): No intersection with client cipher suites.

This means the list of available cipher specs on the server did not include the one sent from the client.

Colin’s comments 2

The server’s certificate was not compatible with the list in GSK_V3_CIPHER_SPECS_EXPANDED. For example the list had only EC cipher specs, but the server certificate was RSA.

Colin’s comments 3

I had a server certificate defined as

SIZE(521) NISTECC …

In the trace I had

EXIT gsk_get_ec_parameters_info(): <— Exit status 0x00000000 (0) EC curve type 34, key size 521
ERROR send_v3_alert(): Sent SSL V3 alert 40 to 10.1.0.2[38736]

INFO edit_ciphers(): Server certificate ec curve 0034 not in supported ecurve tls extension. EC cipher suites disabled

When I changed the size to 256 it worked, and used (C02C,C02B,C024,C023)

From here 0034 is TLS_DH_anon_WITH_AES_128_CBC_SHA

GLD1116E Unable to initialize an SSL connection with 10.1.0.2: 412 –
SSL protocol or certificate type is not supported.

Colin’s comments

The server had been configured for only GSK_PROTOCOL_TLSV1_3=on.

The GSKTRACE output has Client does not support TLS V1.3. No protocol version match found.

GLD1116E Unable to initialize an SSL connection with … 434 – Certificate key is not compatible with cipher suite.

Colin’s comments

The server’s certificate is not consistent with the certificate sent from the client.

For example the server is using an RSA certificate, but the client is sending an EC certificate.

For example the server has an RSA certificate. From the list GSK_V3_CIPHER_SPECS_EXPANDED = C024C006C007C008c024c023c025130313011302009E

it chooses 009E. 009E is TLS_DHE_RSA_WITH_AES_128_GCM_SHA256 =

128-bit AES in Galois Counter Mode encryption with 128-bit AEAD authentication and ephemeral Diffie-Hellman key exchange signed with an RSA certificate.

When my elliptic certificate from the client comes in,

Signature Algorithm: SHA256withRSA, Key: Sun EC public key, 521 bits, parameters: secp521r1 NIST P-521

This is incompatible.

When the server certificate is an Elliptic Curve certificate, then cipher spec C024 is used. C024 is

TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384 = 256-bit AES encryption with SHA-384 message authentication and ephemeral ECDH key exchange signed with an ECDSA certificate.

This works for both of them.

GLD1116E Unable to initialize an SSL connection 440 – Incorrect key usage.

Colin’s comments.

The server’s certificate was defined with KEYUSAGE(CERTSIGN,KEYAGREE)

When I added HANDSHAKE, recreated the certificate, and restarted the LDAP server it worked. KEYUSAGE(HANDSHAKE,CERTSIGN,KEYAGREE).

CERTSIGN says this certificate is used as a CA.

GLD1116E Unable to initialize an SSL connection with 127.0.0.1: 533 – Remote partner indicates unsupported certificate.

Colin’s comments.

I got this when the GSK_TLS_SIG_ALG_PAIRS=”0403″ did not match the server certificate’s signature algorithm.

In the gsktrace for the server I got TLS 1.3 alert 43 received from

In the client’s gsktrace it had

Certificate key algorithm 13, Signature algorithm 25
INFO read_tls13_certificate(): Using client’s signature algorithm list to check server certificate chain
ERROR read_tls13_certificate(): Signature algorithm 25 in server certificate not in client signature algorithms list
ERROR send_tls13_alert(): Sent TLS 1.3 alert 43 to …

in gskcms.h

  • x509_alg_ecPublicKey = 13,
  • x509_alg_sha256WithRsaEncryption = 25,

GLD1116E Unable to initialize an SSL connection with 127.0.0.1: 467 – Signature algorithm not in signature algorithm pairs list.

See the previous section for 533. The GSK_TLS_SIG_ALG_PAIRS from the client does not match the server certificate’s signature.

RACDCERT LIST(LABEL(‘SERVEREC’ )) id(start1)

gives

Signing Algorithm: sha256RSA

This table says 0401 SHA-256 with RSA, so this value is needed in the GSK_TLS_SIG_ALG_PAIRS.

GLD1116E Unable to initialize an SSL connection with …: 516 – No key share groups in common with partner

The configuration was

  • server GSK_SERVER_TLS_KEY_SHARES=0030
  • client GSK_CLIENT_TLS_KEY_SHARES=0024

Specify lists with a value in common.

TLS 1.3 supports 00300029002500240023

TLS 1.2 supports 00250024002300210019

So you could specify =002300240025

LDAPSEARCH client on Linux

ldap_search_ext: Bad search filter (-7)

with -b “o=Your Company” “&(objectClass=*)”

remove the &()s

-b “o=Your Company” “objectClass=*”

worked

ldap_sasl_interactive_bind_s: Unknown authentication method (-6)
additional info: SASL(-4): no mechanism available:

No certificate was sent from the client to the host.

ldap_sasl_interactive_bind_s: Can’t contact LDAP server (-1)
additional info: A TLS fatal alert has been received.

Colin’s comments 1

The list of cipher specs the client sent up (in the first part of the handshake) did not match any of the supported cipher specs in GSK_V3_CIPHER_SPECS_EXPANDED=009E002FC027c02dc023c025130313011302

I used Wireshark to display the network traffic, and the list of supported cipher specs sent in the client hello.

z/OS gsktrace shows

Initial SSL V3 4-character cipher specs:
009E002FC027C02DC023C025130313011302
SSL V3 cipher C02D skipped due to key algorithm
SSL V3 cipher C023 skipped due to key algorithm
SSL V3 cipher C025 skipped due to key algorithm
SSL V3 cipher 1303 skipped for TLS V1.2 sessions
SSL V3 cipher 1301 skipped for TLS V1.2 sessions
SSL V3 cipher 1302 skipped for TLS V1.2 sessions
SSL V3 cipher specs: 009E002FC027
Using TLSV1.2 protocol
Using V3 cipher specification 009E

  • 009E is TLS_DHE_RSA_WITH_AES_128_GCM_SHA256
  • 002F is TLS_RSA_WITH_AES_128_CBC_SHA
  • C027 is TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256

and the z/OS default key had

Signing Algorithm: sha256RSA
Key Usage: HANDSHAKE, DATAENCRYPT, DOCSIGN
Key Type: RSA
Key Size: 4096

Colin’s comments 2

In the gsktrace I had

ERROR check_ocsp_signer_extensions(): extended keyUsage does not allow OCSP Signing

This is because the certificate used in the ocsp server, did not have

Extended Key Usage: critical, OCSP Signing

Re-sign the certificate and check the attribute has been set, using

openssl x509 -in ocspcert.pem -text -noout | less

ldap_sasl_interactive_bind_s: Can’t contact LDAP server (-1)
additional info: An unknown public key algorithm was encountered.

Colin’s comments

As part of the “certificate verify”, the signature algorithm passed to the server was not in the GSK_TLS_SIG_ALG_PAIRS list in the z/OS LDAP environment file.

Check all relevant values are specified, for example

GSK_TLS_SIG_ALG_PAIRS=060105010401030108060805080405030403

ldap_sasl_interactive_bind_s: Can’t contact LDAP server (-1)
additional info: (unknown error code)

I got this when using OCSP for certificate validation. The server sent down an OCSP stapling flow which the ldapsearch code was not expecting, so it ended.

Action: set

GSK_SERVER_OCSP_STAPLING=OFF

ldap_sasl_interactive_bind_s: Invalid credentials (49)


additional info: R004062 Credentials are not valid (srv_ssl_get_client_info:928)

Colin’s comments 1.

The TLS handshake was accepted, but the mapping of the DN to a userid did not return a userid.

Turning on the LDAP trace using f GLDSRV,debug LDAPBE gave

LDAPBE srv_process_bind_request()374: do_bind msgID=1, connID=4, flags=0x22, controls=0x0, DN=”, authType=3, bindType=1, version=3
LDAPBE srv_process_bind_request()939: do_return_bind msgID=1, connID=4, bindDN=”, safUserID=”, dnList=0x0, grpList=0x0, rc=49: R004062 Credentials are not valid (srv_ssl_get_client_info:928)

ERROR srv_process_bind_request()957: Request failed OP code=0 bind=CN=secp521r,O=cpwebuser,C=GB

Map the certificate to a userid – note the ‘.’ in the SDNFILTER name.

//S1 EXEC PGM=IKJEFT01,REGION=0M
//STEPLIB DD DISP=SHR,DSN=SYS1.MIGLIB
//SYSPRINT DD SYSOUT=*
//SYSTSPRT DD SYSOUT=*
//SYSTSIN DD *
*RACDCERT LISTMAP ID(ADCDE)
*RACDCERT DELMAP(LABEL('LINUXECEC')) ID(ADCDE)
*SETROPTS RACLIST(DIGTNMAP, DIGTCRIT) REFRESH
RACDCERT MAP ID(ADCDE ) -
   SDNFILTER('CN=secp521r.O=cpwebuser.C=GB') -
   WITHLABEL('LINUXECEC')
RACDCERT LISTMAP ID(ADCDE)
SETROPTS RACLIST(DIGTNMAP, DIGTCRIT) REFRESH
/*

Once you have defined the mapping, you do not need to restart LDAP, it is picked up on the next usage.

Colin’s comments 2.

The mapping of certificate to userid exists, but the userid is revoked, or otherwise not available.

ERROR srv_ssl_get_client_info() 902: safRc=8 racfRc=8 racfRsn=40

This is from R_usermap (IRRSIM00): Map application user

  • racfrsn 28 – Certificate is not valid.
  • racfrsn 40 – The Distinguished Name length is not valid, or the Distinguished Name string is all blanks (x’20’), all nulls (x’00’), or a combination of blanks and nulls.
  • racfrsn 48 – There is no distributed identity filter mapping the supplied distributed identity to a RACF user ID, or The IDIDMAP RACF general resource class is not active or not RACLISTed.

ldap_ssl_client_init failed! rc == 113, failureReasonCode == 2

This is not listed in table 7 of the LDAP client programming

I turned on trace using ldapsearch .. -d all …

and got

ERROR ldap_ssl_client_init()710: Unable to initialize SSL environment: Error 202
TRACE ldap_ssl_client_init()744: <= Status 113, Reason 2

Error code 202 is in the table = Keyring open error.

ldap_connect()409: Unable to initialize SSL connection to 127.0.0.1[1389]: Error 116, Reason -99

Colin’s comments

This was due to a mismatch in the GSK_TLS_SIG_ALG_PAIRS statement.

ldap_connect()409: Unable to initialize SSL connection to 127.0.0.1[1389]: Error 116, Reason -13

Colin’s comments

This was due to a mismatch in the supported versions of TLS.

ldap_connect()409: Unable to initialize SSL connection to 127.0.0.1[1389]: Error 116, Reason 438

ldap_ssl_socket_init: Unable to initialize SSL connection: Error 456.

Colin’s comments

On the system log I had

ICH408I USER(COLIN ) GROUP(SYS1 ) NAME(COLIN PAICE )
CSFOWH CL(CSFSERV )
INSUFFICIENT ACCESS AUTHORITY
ACCESS INTENT(READ ) ACCESS ALLOWED(NONE )

I used the following to get access, and it worked.

permit CSFOWH class(CSFSERV) ACCESS(read) id(COLIN)
SETROPTS RACLIST(CSFSERV) REFRESH

Using groups for authority and access checking is so last century.

I’ve been exploring LDAP as a userid repository (as can be used by MQ multi platform). This got me into an interesting rabbit warren of Role Based Access Control (RBAC) and Attribute Based Access Control(ABAC), and how you set up your repository to hold userid and access information.

In LDAP on z/OS I can set up a user

dn: cn=adcda, o=Your Company
objectclass: top
objectclass: person
objectclass: organizationalPerson
objectclass: ibm-nativeAuthentication
sn:adcda
cn: mqadmin
ibm-nativeId: adcda

With this, if I try to logon with cn=adcda, o=Your Company, it will use RACF to check that the password I specified is valid for the userid in ibm-nativeId (adcda). I’ve logged on to MQ on Linux using this definition, and had RACF on z/OS check my password. (I thought this was pretty neat.)

This definition has an attribute sn ( surName) of adcda and a cn (commonName) of mqadmin.

LDAP groups.

You can set up static LDAP groups

These have a list of members

dn: cn=ldap_team_static,o=myorg
objectclass: groupOfNames
cn: ldap_team_static
member: cn=colin,o=myorg
member: cn=colin2,o=myorg

Groups within groups

A group can include other groups by name

dn: cn=ldap_team_nested,o=myorg
objectclass: container
objectclass: ibm-nestedGroup
cn: ldap_team_nested
ibm-memberGroup: cn=ldap_team_static,o=myorg
ibm-memberGroup: cn=mq_team,o=myorg

You can display group members

ldapsearch … -b "cn=ldap_team_nested,o=myorg" "objectclass=*" ibm-allMembers

You can also have smart, dynamic groups

This is the “new” way of doing it – which has been available for about 30 years.

dn: cn=dynamic_team,o=Your Company
objectclass: groupOfUrls
cn: dynamic_team
memberurl: ldap:///o=Your Company??sub?(cn=mqadmin)

This says

  • query the tree under o=Your Company ( a more realistic subtree would be ou=users,o=Your Company)
  • sub, says all levels in the tree (base is search just the specified URL, one is search just one level below the specified URL)
  • list all those with the specified attribute cn = mqadmin.

Instead of updating a group – you add information into the user’s entry, and it would get picked up automatically.

Ideally there would be an LDAP attribute “role” which you could use for this. The default schemas do not have this.

RACF

If you are using RACF you can set up a userid, and connect it to a group. RACF does not support nested groups for authority and access checking.

Access to resources

Using group or Access Control List

Many systems provide groups or Access Control Lists (ACLs) to control access to a resource.

For example you might say users in group MQADMIN can update dataset MQ.JCL, and userids in group MQOTHER can read the MQ.JCL dataset.

This has limitations in that the resources are treated individually, so if you have 10 files, you have to grant a group access to 10 profiles.

Role Based Access Control(RBAC)

I struggled initially to see the difference between RBAC and group or ACLs.

With RBAC you do not give update or read access to a resource, you give access to a “task” or “role” like “Maintain records”, “client admin”, or “clerk”. You then give the roles the appropriate access. You could implement this at a basic level using groups called MAINTREC and CLIENTADM with update access to the resource, and a group CLERK with read access, as sketched below.
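A sketch of that basic-level version in RACF, using the group and dataset names from the example above (profiles and access levels are illustrative):

ADDGROUP MAINTREC
ADDGROUP CLERK
CONNECT COLIN GROUP(MAINTREC)
ADDSD 'MQ.JCL' UACC(NONE)
PERMIT 'MQ.JCL' ID(MAINTREC) ACCESS(UPDATE)
PERMIT 'MQ.JCL' ID(CLERK) ACCESS(READ)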

Attribute Based Access Control(ABAC)

ABAC seems to take this further. There are products you buy which can do this for you, but I could not see how it was configured or how it worked. Below is my interpretation of how I might configure it using LDAP.

You could have a user defined like

dn: cn=colin paice, o=Your Company
role: doctor
sn:adcda
cn: mqadmin
ibm-nativeId: adcda

A set of resources like

dn: cn=doctor update,o=Your Company
table: HOSPITAL.PATIENT
table:HOSPITAL.XRAYS.DATA
MQQueue:Surgery.queue

This is a list of resources a doctor needs to do their job.

And a set of rules

dn:cn=doctorsRules,o=Your Company
role: doctor
resource: cn=doctor update,o=Your Company
site:London
site:Glasgow
reason:Patient Update

If someone (Doctor Colin Paice) wants to update a patient’s record, you can do a query using dynamic groups

  1. What roles does Colin Paice have?
  2. What is the resource group for the DB2 table HOSPITAL.PATIENT?
  3. Is there a valid rule for the list of roles for Colin Paice, with the resource group for the table HOSPITAL.PATIENT, where access is from London, and the reason is Patient Update?
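Step 3 could be a single LDAP search against the rules, using the hypothetical role, resource, site and reason attributes above. This is only a sketch of the idea, not a working schema:

ldapsearch … -b "o=Your Company" "(&(role=doctor)(resource=cn=doctor update,o=Your Company)(site=London)(reason=Patient Update))"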

Is it that simple?

You have to be able to handle the case where a doctor may only look at a patient’s notes if the doctor is the “attending physician”, so the person is a patient of the doctor. This might mean a “patientOf: Doctors_name” field in the user’s record.

It looks like you have to be very careful in setting this environment up, as you could have many thousands of rules, and it could be very hard to manage.

Even if it is hard, I think the idea of virtual groups, where you select records based on a criterion, is a good idea. It may be faster than using groups because it can exploit the index capability of the underlying database, rather than building lists of group membership.

One minute MVS: LDAP defining resources

Having set up an LDAP server, you need to add information to the directory. This is not very well described in the TDS documentation.

Basic overview of data

To add information about a user, use a file in USS such as colin.ldif

dn: cn=colin, o=Your Company
objectclass: top
objectclass: person
objectclass: organizationalPerson
cn: LDAP Administrator
sn: Administrator

Where

  • the “key” to identify an entry is the dn.
  • objectclass is the sort of object, and defines what attributes it can have. An entry can have many object classes
  • cn: and sn: are attribute values
  • There is a blank line following each definition to indicate the end of it. You can have many of these in a file, to allow you to do a bulk update.

How do I display the contents?

You need to issue a query. This comes in two parts, identifying the user, and the request.

To identify requestor you need something like

ldapsearch -h 127.0.0.1 -D “cn=ibmuser, o=Your Company” -w ? …

and the query. For example, to list all the information about cn=ibmuser, o=Your Company, add the following to the ldapsearch request above

-b “cn=ibmuser, o=Your Company” “(objectclass=*)”

This gives

cn=ibmuser, o=Your Company
objectclass=top
objectclass=person
objectclass=organizationalPerson
objectclass=ibm-nativeAuthentication
cn=ibmuser
sn=Administrator
ibm-nativeid=IBMUSER

For all information under o=Your Company

-b “o=Your Company” “(objectclass=*)”

For only the list of sn for all users

-b “o=Your Company” “(objectclass=*)” sn

This gives

o=Your Company

cn=colinw, o=Your Company
sn=Administrator

cn=colin, o=Your Company
sn=Administrator

cn=LDAP Administrator, o=Your Company
sn=Administrator

cn=ibmuser, o=Your Company
sn=Administrator

What authority do I need?

Typically you need to be an LDAP administrator, or have the appropriate access to Access Control lists. See here for managing ACLs.

How do I add information?

If I want a userid definition like the one for ibmuser (above), so I can login with RACF, I need to add the attribute

ibm-nativeId: COLIN

This attribute is in object type

objectclass: ibm-nativeAuthentication

So to be able to specify the ibm-nativeId: attribute, you need to specify the object class as well.

My definition is now

dn: cn=colin, o=Your Company
objectclass: top
objectclass: person
objectclass: organizationalPerson
objectclass: ibm-nativeAuthentication
cn: LDAP Administrator
sn: Administrator
ibm-nativeId: COLIN

Add it to the directory

You can add this to the directory using

ldapmodify -a -h … -p … -D “…” -w … -f colin.ldif

Where

  • -a says add (instead of modify)
  • -f colin.ldif is the name of the file with the statements in it.

Modifying an entry

If you want to modify an existing entry, you can change the whole entry, or parts of it.

To add an entry

dn: cn=colin, o=Your Company
changetype: add
objectclass: top

To delete a whole entry

dn: cn=colin, o=Your Company
changetype: delete

To add an attribute to an entry

dn: cn=colin, o=Your Company
changetype: modify
add: attrccp
attrccp: value1
attrccp: value2…

This adds two attrccp values to the definition

To modify (replace) an existing attribute

dn: cn=colin, o=Your Company
changetype: modify
replace: ibm-nativeId
ibm-nativeId: PAICE

To delete an attribute

dn: cn=colin, o=Your Company
changetype: modify
delete: ibm-nativeId

This deletes all ibm-nativeId attribute values.

If you want to delete a specific value, specify it after the delete: line

dn: cn=colin, o=Your Company
changetype: modify
delete: attrccp
attrccp: value2

One minute MVS. Getting started with LDAP on ADCD.

LDAP is a standard protocol for accessing directory information over TCP/IP. For example the command

ldapsearch -h 127.0.0.1 -D “cn=Admin, o=Your Company” -w secret -b “o=Your Company” “(objectclass=*)” aclEntry

This sends a request to IP address 127.0.0.1 with userid cn=… and password “secret”, for information under the subtree “o=Your Company”, and requests that it sends back information on any ACL entries.

z/OS implementation

LDAP on z/OS is also known as Tivoli Directory Server (TDS).

It can run with different backend databases, from DB2 to files in a USS directory. It can interface to RACF, so you can query userid and group information from RACF through LDAP.

Schemas.

You need to configure a schema defining what fields there are, and the relationships between them. For example, for an organisation telephone directory you might have

dn: cn=LDAP Administrator, o=Your Company
objectclass: organizationalPerson
cn: LDAP Administrator
sn: Administrator
userPassword: ********
phoneNumber:1234567

Where

  • dn: cn=LDAP Administrator, o=Your Company This is the internal name of the object, and says what part of the data tree it belongs to (“o=Your Company”)
  • objectclass: organizationalPerson defines the object type
  • cn: LDAP Administrator This is the common name (nick name) of the object
  • sn: Administrator This is the surname of the person
  • userPassword: ******** This is the user’s password. It has been defined so that the value is not displayed
  • phoneNumber:1234567 This has been defined so that it can only take numbers and ‘-‘.

You can define your own attributes and properties. You just need to update the schema.

Which database is used?

A sample LDAP configuration might contain

database LDBM GLDBLD31/GLDBLD64
suffix “o=Your Company”
databaseDirectory /var/ldap/ldbm

  • database LDBM says there is a database in a USS directory
  • GLDBLD31/GLDBLD64 are the names of the interface routines to use.
  • suffix “o=Your Company” is the root of the subtree in this database
  • databaseDirectory /var/ldap/ldbm is the name of the USS directory

You can configure LDAP to say: for these names (o=someoneElsesCompany) go to another LDAP server at this address.

If I use a query like ldapsearch -h 127.0.0.1 -D “cn=LDAP Administrator, o=Your Company” -w secret -b “o=Your Company” “(objectclass=*)” aclEntry, the -D “cn=LDAP Administrator, o=Your Company” says look for a userid with the given DN in the o=Your Company subtree. With the above definitions it would look in the USS file system under /var/ldap/ldbm for a userid cn=LDAP Administrator, o=Your Company.

Configuring an LDAP server on ADCD.

ADCD is a preconfigured system which can run on zPDT and ZD&T. These provide a System 390 emulator. The system comes with a lot of software installed, and some subsystems such as z/OS, MQ, DB2, IMS, CICS and z/OSMF preconfigured.

The software for LDAP(Tivoli Directory Server) is installed but not configured. The documentation is extensive, and the configuration file is very large (with lots of comments). You run a configuration script which produces some files.

However for a simple configuration you only need a few files to run.

Some of these files do not work – for example they try to define a userid with an existing Unix uid.

I’ve taken the updated files and put them on git hub.

The TDS documentation is here.

If you get into a mess you can just delete the /var/ldap/ldbm directory and start again!

Getting cipher keys to another site – the basics of Exporter and Importer keys in ICSF.

I’ve spent some time (weeks) exploring ICSF with the overall mission of sending an encrypted data set between two sites. Looking back it was like the saying when you’re up to your neck in alligators, it’s hard to remember that your initial objective was to drain the swamp.

I’ve explored many parts of ICSF. One area that confused me for a while was the use of Key Encrypting Keys, or Exporter and Importer keys, also known as transport keys. I’ll explain my current thoughts on it – bearing in mind these may not be 100% accurate.

If I want to encrypt data on one system, send the encrypted data to another system, and decrypt it on that system, the sending system needs the encryption key, and the receiving system needs the decryption key.

Typically the data encryption is done using a symmetric key, where the same key is used to encrypt and decrypt. You can also use asymmetric keys, where you encrypt with one key, but need a different key to decrypt.

The first challenge is how to securely get the keys onto the systems.

  • You cannot just email the symmetric key to the remote site, because bad people monitoring your email will be able to get the symmetric key.
  • You could print the key, and send it through the mail system, courier, or carrier pigeon to the remote site. This still means that bad guys could get the key (using a telephoto lens through a window, through the security cameras, or by catching the pigeon).
  • A secure technique called Diffie-Hellman can be used to create the same symmetric key at each end. It uses private/public keys and an agreed seed. No sensitive data is sent between the two systems.

When setting up ICSF to work across systems, you need to set up keys for both A to B, and for B to A. You can use the same keys and seed for both, but the keys will be different.

If you are setting up several independent systems you will need keys A to B, B to A, A to C, C to A, B to C and C to B etc.

Setting up keys

You can set up a CIPHER key for encrypting data, and you can set up a MAC key for checking that what was sent is the same as what was received. (You hash the data, then encrypt the hashed value. Send the hash along with the data. If both ends do it, they should get the same answer!)

Decrypting and re-encypting

The keys are stored on disk in an encrypted format. The key for this is within the cryptographic hardware. If you want to send a key to another system, you need to encrypt it.

ICSF has a function in the hardware which says “here is some data encrypted with the hardware key, decrypt it, and re-encrypt it with this other key”. It has a matching function “here is some encrypted data, decrypt it, and re-encrypt it with the hardware key”. This way the clear text of the key is never seen. To extract a key, you need to provide a key to re-encrypt it.

How to send the key to the remote system?

You have several choices depending on the level of security your enterprise has.

ICSF has a function to export a symmetric key using

  • an RSA public key. At the remote end you need the private key to be able to decrypt it. This needs an RSA key size of at least 2048.
  • an exporter key. At the remote end you need the matching importer key to decrypt it. Under the covers this is a symmetric (AES) key. The AES technique is faster, and “stronger” than RSA (it takes longer to break it). An AES 256 key is considered stronger than an RSA 4096 key.

I think of exporter and importer keys as a generalised public key and private key. The concept may be the same, but the implementation is very different.

The question “should I use RSA or Exporter/Importer” comes down to how secure do you want to be. If you export a key once a week the costs are small.

These keys used to encrypt other keys are known as key encrypting keys, and you often see KEK in the documentation.

What are the advantages of using an exporter and an importer key?

If you are using a symmetric key to encrypt your data, why are there exporter and importer keys, and not just one key?

If a purely symmetric key was used as a key encryption key, this means that if you are allowed to encrypt with it – you can decrypt with it. This then means you can decrypt other people’s data which used the same key.

By having one key for encryption and another key for decryption you can isolate the authorities. If I have access to the exporter key, even if I extract it and send it to my evil colleague at the remote end, they cannot use it to decrypt the data. If I have access to a symmetric key generated with Diffie-Hellman, the same key is used at each end, so I could extract it and send it to my evil colleague, who could use it.

Setting up my first importer or exporter key.

Use a pair of matching ECC public and private keys. The key type (eg Brain Pool) and key size must match. You use CSNBKTB2 to define a skeleton of type importer or exporter (or cipher). You then pass the skeleton into CSNDEDH along with the private and public keys. You can then use CSNBKRC2 to add it to the local CKDS.

You can do this at the remote end, just switch the public and private, and the importer to exporter. You do not need to send a key between the two systems.

Once you have defined your first exporter and importer key there is an alternative way of creating transport keys, using CSNBKGN2.

Alternative way of defining more transport keys.

There are three scenarios to consider when setting up transport keys between two systems.

  • On system A, create an exporter key for system A, and create an importer key for system B. The importer key will need to be encrypted as it needs to be sent to system B. The system A (local) exporter key can then be used to export a (cipher or MAC) key. Write this key to a file and send it to the other system. At system B, use the importer key to import this data into system B’s key store.
  • Do it from the “other end”. On system B, create an importer key for system B, and create an exporter key for system A. This exporter key will need to be encrypted as it needs to be sent to system A. Write this key to a file and send it to the other system. At system A import it. System A can then use the new key to export a (cipher or MAC) key from that system, write it to a file, and send it to system B, which can import it.
  • Define them on another system. On system C, create an exporter key for system A, and an importer key for system B. These both need to be encrypted (using different keys to encrypt each one). Send the encrypted keys to system A and system B. They can import the keys, and then send keys from system A to system B.

You use CSNBKGN2 to do this. I’ll only cover the first two cases.

You need to specify one of

  • keytype1 = “IMPORTER”, keytype 2 = “EXPORTER” or
  • keytype1 = “EXPORTER”, keytype 2 = “IMPORTER”

Use RULE = “OPEX ” where

  • OP means store the first key in the local (OPerational) key store.
  • EX means keytype 2 is to be exported. You have to specify an AES exporter key in the field key_encrypting_key_identifier_2.

Note. To export the key you already need an AES exporter key! This means you cannot create your first transport key with this method.

The output of this function is

  1. a key which you can add to the local key store using CSNBKRC2,
  2. a key which you can write to a file and send it to the remote system.

At the remote system you use CSNDSYI2 and the Importer key to import the data and re-encrypt it with the local key. Then use CSNBKRC2 to add it to the CKDS.

Which technique is better?

I think using a pair of matching ECC public and private keys and Diffie-Hellman is simpler, as it does not involve sending a file between systems. As this activity is done infrequently it may not matter.

Using DH may not be as secure as the alternative.

How many transport keys do I need?

You could create a new transport key every week if you wanted to. It is only used to send a data key to a remote system, so it is transient. When you have created a cipher key to encrypt and decrypt your data, you need to keep it for as long as you need access to the data. Once you recreate it, you are unable to read data sets encrypted with the previous key.

Depending on your security requirements you might want to have more than one transport key for data isolation. For example test data and production data have different keys.


You could use Enterprise Key Management Foundation (EKMF) from IBM to manage your keys

EASY ICSF – making it easy to use the API to generate and export/import keys

I’ve put some code on GITHUB which has C and REXX code which have a simpler interface to ICSF. The code examples hide a lot of the complexity.

For example to generate an AES CIPHER key the high level C code is

// build the skeleton for C=CIPHER (could be E for EXporter or I for IMporter)
//  It returns the skeleton and its length 
rc = skeletonAES("C",& pToken,& lToken); 
if ( rc != 0 ) return rc; 

// Generate the key - passing the skeleton and returning the Token
// input the skeleton 
// output the token 
rc = GENAES2(pToken,&lToken); 
if ( rc != 0 ) return rc; 

// Add this to the CKDS                                                        
rc = addCKDS(pKey,pToken       ,lToken,pReplace); 
if ( rc != 0 ) return rc; 

printf("GENAES %s successful\n",pKey); 
return rc; 

To export an AES key

// Pass in the name of the AES key, pKey,
// and the name of the encryption key (AES EXPORT or PKI), pKek.
// Get back the blob of data and its length.
rc = exportAES(pKey, pKek, &pData, &lData);
if (rc > 0) return rc;

// Write the blob to the file specified by the TOKEN DD statement.
rc = writeKey("dd:TOKEN", pData, lData);

The output in //SYSPRINT is

Exists: CSNBKRR2 read AESDHE CKDS rc 0 rs 0 No error found
KEY:AESDHE:INTERNAL SYMMETRI EXPORTER CANAES
Exists: CSNBKRR2 read PKDS2 CKDS rc 8 rs 10012 Key not found
Exists: CSNDKRR read PKDS2 PKDS rc 0 rs 0 No error found .
KEK:PKDS2:INTERNAL PKA RSAPRIV 1024MEAO
RSA ¬AES:Rule:AES PKOAEP2 SHA-256 AES AESKW AES
ExpAESK:CSNDSYX rc 8 rs 2055 The RSA public key is too small to encrypt the DES key

Where…

  • Exists: CSNBKRR2 read AESDHE CKDS rc 0 rs 0 No error found
    • It used the ICSF CSNBKRR2 to check AESDHE is in the CKDS
  • KEY:AESDHE:INTERNAL SYMMETRI EXPORTER CANAES
    • It reports some info on the key. It is a Symmetric (AES) Exporter and can do AES processing
  • Exists: CSNBKRR2 read PKDS2 CKDS rc 8 rs 10012 Key not found
    • This is OK; it looks in the CKDS first, but as this is a PKI key it will not be found there
  • Exists: CSNDKRR read PKDS2 PKDS rc 0 rs 0 No error found .
    • It is found in the PKDS
  • KEK:PKDS2:INTERNAL PKA RSAPRIV 1024MEAO
    • This gives info about the Key Encryption Key. It is RSA and has a private key. The key size is 1024 bits
  • RSA ¬AES:Rule:AES PKOAEP2 SHA-256 AES AESKW AES
    • This is the rule used
  • ExpAESK:CSNDSYX rc 8 rs 2055 The RSA public key is too small to encrypt the DES key
    • The size of the PKI key was too small.
    • As well as giving the return code and reason code, it gives the reason for some of the reason codes.
    • When I repeated this with a large enough RSA key, it worked successfully.

There are also some macros such as

  • isRSAPRIV… is this token (blob of data) an RSA private key?
  • isEXPORTER … has this token been defined as an EXPORTER key?

I use these to check that the keys passed to an ICSF operation are valid for that operation.
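For example (a hypothetical fragment; I am guessing the macros just take the key token, so check the GitHub source for the actual arguments):

/* Hypothetical usage - check the key encrypting key before an export.   */
/* The macro argument and the variable names are assumptions.            */
if (!isEXPORTER(pKekToken))
{
    printf("Key %s cannot be used for export - it is not an EXPORTER key\n",
           pKek);
    return 8;
}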

The GitHub code is a work in progress. As I find problems I fix them, but overall it should show you what you can do with it.

z/OSMF autostart: how to stop it, and how to use it (or not)

I upgraded my z/OS from ADCD Z24A to ADCD Z24C. This updates lots of the software, including z/OSMF, and includes some performance fixes, so z/OSMF startup is much quicker and uses much less CPU. However, the newer level of ADCD Z24C now starts z/OSMF automatically, and it took me a few attempts to stop this.

When z/OS starts, it takes configuration parameters from IEASYSxx. You can see which IEASYSxx you are using with the DISPLAY IPLINFO operator command. You can see which IZU parameter you are using with

d iplinfo,izu
IEE255I SYSTEM PARAMETER ‘IZU’: AS

With the DISPLAY PARMLIB command, you get the parmlib concatenation

D PARMLIB
IEE251I 08.34.02 PARMLIB DISPLAY
PARMLIB DATA SETS SPECIFIED AT IPL
ENTRY FLAGS VOLUME DATA SET
    1   S   C4CFG1 USER.Z24C.PARMLIB
    2   S   C4CFG1 FEU.Z24C.PARMLIB
    3   S   C4SYS1 ADCD.Z24C.PARMLIB
    4   S   C4RES1 SYS1.PARMLIB

Where the ‘S’ means the data set came from a LOADxx parameter, and a ‘D’ means the default SYS1.PARMLIB.

Look in each data set in turn for the IZUPRMxx member (xx=AS in my case).

Contents of the IZUPRMxx member

Within the member is SERVER_PROC(‘IZUSVR1’). This tells the IPL code which server procedure to start.

Also within the member is a line with AUTOSTART(…). The value can be

  • CONNECT – I think of this as AUTOSTART(NO)
  • LOCAL – I think of this as AUTOSTART(MAYBE)

See here for a discussion.

It is a bit more complex than YES|NO. It has the capability to allow just one of a group of z/OSMF servers to start.


If you have AUTOSTART(CONNECT), specify AUTOSTART_GROUP(NONE).

If you have AUTOSTART(LOCAL) and AUTOSTART_GROUP(COLIN) for more than one z/OSMF server, then at IPL it checks whether a z/OSMF server with AUTOSTART(LOCAL) and AUTOSTART_GROUP(COLIN) is already active. If so, the instance does not start.

The documentation says it checks by having an ENQ on the file system with the AUTOSTART_GROUP value. This implies you need the z/OSMF data directories to be on the same ZFS file system.
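As a sketch, the relevant IZUPRMxx statements for one member of an autostart group might look like this (COLIN is just the group name used above; check the exact syntax and quoting against the IZUPRM00 sample shipped with z/OSMF):

SERVER_PROC('IZUSVR1')
AUTOSTART(LOCAL)
AUTOSTART_GROUP(COLIN)

A server you want to start manually (or via automation) would instead have AUTOSTART(CONNECT) and, as noted above, AUTOSTART_GROUP(NONE).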

Should I use autostart?

This is a tough question. I cannot test it because I only have one LPAR, but I have some thoughts.

Single LPAR, single z/OSMF instance

This is relatively easy. You can start z/OSMF automatically through commands at IPL, use the z/OSMF IZUPRMxx method, or start it manually.

Multiple LPARs in a sysplex, single z/OSMF instance.

If you have a shared file system, you can start the z/OSMF instance on any LPAR. If you start the instance more than once, it detects this and will only allow one instance to be active.

You have to plan to be able to start an instance on different systems. For example, the IP address and port for the base system will be different, so you'll need to set up a TCP/IP environment to support this. See HA Liberty web server – introduction to using VIPA to provide high availability connectivity, and the z/OSMF documentation.

Multiple LPARs in a sysplex, multiple z/OSMF instances.

This is where the autostart may be useful. The first LPAR to be started will start the z/OSMF instance. When other LPARs start, they detect that another z/OSMF in the group is active, and will not start the z/OSMF instance. As with starting a single z/OSMF instance in a multi LPAR environment, you need to plan the connectivity. See HA Liberty web server – introduction to using VIPA to provide high availability connectivity and the z/OSMF documentation.

I struggle to see why starting just one instance is useful. For availability I would want more than one instance to be running at the same time. With only one instance, if you stop it and restart it on a different LPAR, you have a period of a minute or more when you do not have z/OSMF running.

I would have a group_token, so each instance can register that the “group name” is active. An application could then ask to be notified when a member of the group becomes active, using standard z/OS services.

Stateless z/OSMF instances

If you are using z/OSMF facilities which save state, the autostart of just one server will not work. For example, if you are using any workflow facilities, state is saved in the file system, and you need to log on to the same instance to continue working on the workflow. If today you use LPARA’s z/OSMF and tomorrow you use LPARB’s z/OSMF, you cannot continue your workflow.

You need to plan your z/OSMF usage: have “stateless” z/OSMF servers, which can use autostart, and workflow servers, for which you have only one instance (which can be moved around) and which do not use autostart.

How do I put today’s date in JCL?

I have a backup job which I run to take a current copy of a file and save it with today’s date. For months I’ve been editing it to change the date. Ten minutes of browsing the internet showed me how easy it is to do this automatically!

For example

//MYLIBS1 JCLLIB ORDER=USER.Z24C.PROCLIB
// SET TODAY=D&YYMMDD
//S1 EXEC PROC=BACKUP,P=USER.Z24C.PARMLIB,DD=&TODAY.

The procedure has

//BACKUP PROC P='USER.Z24C.PROCLIB',DD='UNKNOWN'
//S1 EXEC PGM=IKJEFT01,REGION=0M,
// PARM='XMIT A.A DSN(''&P'') OUTDSN(''BACKUP.&DD..&P'')'
//SYSPRINT DD SYSOUT=*
//SYSTSPRT DD SYSOUT=*
//SYSTSIN DD *
// PEND

which gave me

IEFC653I SUBSTITUTION JCL - PGM=IKJEFT01,REGION=0M,PARM='XMIT A.A DSN(''USER.Z24C.PARMLIB'') OUTDSN(''BACKUP.D210906.USER.Z24C.PARMLIB'')'

I could have used (see here for a complete list)

  • &LYYMMDD for local date
  • &HHMMSS for time
  • &LHHMMSS for local time
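For example (an untested sketch, reusing the BACKUP procedure above), you could put both the local date and the time into the backup name:

// SET STAMP=D&LYYMMDD..T&LHHMMSS
//S1 EXEC PROC=BACKUP,P=USER.Z24C.PARMLIB,DD=&STAMP.

The double period after &LYYMMDD leaves one real period in the substituted value, so &STAMP resolves to something like D210906.T083402 and the output data set gets both a date and a time qualifier.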

Easy – I should have done this years ago!