Using LDAP with MQ Multiplatform.

MQ Multiplatform can use LDAP as a userid and group repository, so you can log on to any machine where MQ is running using your corporate userid and password.

I’ve logged on to MQ on my Linux machine using my z/OS userid and password. It was pretty easy to set up (I had prior experience of using LDAP), and although it didn’t quite behave as I expected, I thought this was pretty clever.

Once you have installed LDAP, started it, and created your directory structure (users, groups) and access permissions, you can start to use it. I’ve documented some of the initial setup here. It covers some of the concepts referred to below.

I used LDAP (Tivoli Directory Server) on z/OS as my LDAP server.


I’ve also written using LDAP with MQ and nested groups (MQ NESTGRP).

Using LDAP from MQ Multiplatform

The IBM documentation for this is so-so. It gives examples; the examples didn’t work for me, but they were enough to point me in the right direction.

Start here.

I created a LDAP.MQSC file with

DEFINE AUTHINFO(MYLDAP) +
AUTHTYPE(IDPWLDAP) +
CONNAME('10.1.1.2(389)') +
AUTHORMD(SEARCHGRP) +
BASEDNG('o=Your Company') +
BASEDNU('o=Your Company') +
LDAPUSER('cn=adcda, o=Your Company') +
LDAPPWD('adcdapw1') +
SECCOMM(NO) +
CLASSUSR('ibm-nativeAuthentication') +
CLASSGRP('groupOfNames') +
GRPFIELD(sn) +
SHORTUSR(sn) +
REPLACE

ALTER QMGR CONNAUTH(MYLDAP)

REFRESH SECURITY TYPE(CONNAUTH)
* ALTER QMGR CONNAUTH(SYSTEM.DEFAULT.AUTHINFO.IDPWOS)

Where the key fields for connecting to LDAP are

  • conname – the IP address of the LDAP server.
  • ldapuser and ldappwd – userid and password to access LDAP.
  • seccomm – use TLS/SSL to contact the LDAP server. I used “no” while setting this up.

The key fields for identifying users are

  • basednu – the subtree to be used for userids, for example all users are one level under ou=user,o=myorg.
  • classusr – is the objectclass attribute to identify the userid. The default is inetOrgPerson.
  • shortusr – the dn identifiers are too long; MQ needs userids of 12 characters or fewer. This says which attribute to use as the short userid.

The key fields for identifying which groups an id belongs to are

  • authormd – how to search for the authorisation.
  • basedng – the subtree to be used for groups, for example ou=group,o=myorg.
  • classgrp – the objectClass which objects must have to be recognised as a group.
  • grpfield – the simple name of the group.
  • findgrp – the attribute to filter on, for example ‘member’.
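
The bullets above can be sketched as code. This is a hypothetical illustration (not MQ's actual implementation) of how the CLASSUSR, SHORTUSR and BASEDNU attributes combine into the search that the queue manager issues when a user logs on:

```python
# Hypothetical sketch: how AUTHINFO attributes could be turned into the
# LDAP user-lookup search issued at connection time. Not MQ's real code.

def user_search(classusr, shortusr, basednu, userid):
    """Return the (filter, search base) pair for looking up a short userid."""
    filt = "(&(objectClass=%s)(%s=%s))" % (classusr, shortusr, userid)
    return filt, basednu

filt, base = user_search("ibm-nativeAuthentication", "sn",
                         "o=Your Company", "ibmuser")
print(filt)  # (&(objectClass=ibm-nativeAuthentication)(sn=ibmuser))
print(base)  # o=Your Company
```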

Being a careful person, I started an interactive runmqsc session in one terminal, and used runmqsc … < LDAP.MQSC in another window. This way, if there were problems, I could use the interactive session to reset the QMGR CONNAUTH (as in the comment above). The userid that started the queue manager does not need a password, so if you issued strmqm qma you can use runmqsc qma without a userid and password. It gets harder if your id is not the id the queue manager is running under.

LDAP definition of my logon userid

The userid I wanted to use with MQ was defined

dn: cn=ibmuser, o=Your Company
objectclass: top
objectclass: person
objectclass: organizationalPerson
objectclass: ibm-nativeAuthentication
objectclass: inetOrgPerson
cn: ibmuser
sn: ibmuser
ou: test
st: cn=group,ou=groups,o=your Company
st: cn=mqadmin,ou=groups,o=your Company
ibm-nativeId: ibmuser

The authinfo data has

BASEDNU('o=Your Company') +
CLASSUSR('ibm-nativeAuthentication') +
SHORTUSR(sn) +

When I try to log on with userid ibmuser, MQ issues an LDAP query for the record with

  • sn=ibmuser
  • with an object class of ibm-nativeAuthentication (which provides the RACF support for userid and password)
  • in the subtree o=Your Company

Check the LDAP configuration when things go wrong

It took me a few hours to determine why I could log on with one id, but not another. Some LDAP entries worked, and some did not. It turned out to be an Access Control List (ACL) setup problem, where the LDAPUSER userid was not authorised to see some of the records. With the above AUTHINFO object, the query that MQ uses to check authorisation is like

ldapsearch -h 127.0.0.1 -D "cn=adcda, o=Your Company" -w ? -b "o=Your Company" "(&(objectClass=ibm-nativeAuthentication)(sn=zadcdc))"

Where the parameters match up with the authinfo object above, and zadcdc is the userid trying to logon.

If you get no data back, get an authorised person (cn=ibmuser…) to issue the command for the problem userid zadcdc:

ldapsearch -h 127.0.0.1 -D "cn=ibmuser, o=Your Company" -w ? -b "o=Your Company" "(&(objectClass=*)(sn=zadcdc))" aclentry aclsource

The aclentry will give you the userids or groups who are authorised to use the entry, and the access they have.

The aclsource tells you which node in the tree the ACL was inherited from (for example aclsource=o=Your Company says it came from the root node). I had set up an ACL for my zadcdc which did not include my LDAPUSER.

Setting up MQ connect authorities

You can issue the command

setmqaut -m qml -t qmgr -p ibmuser +connect

to give ibmuser connect authority.

You can use LDAP groups for example

setmqaut -m qml -t qmgr -g "cn=mqadmin,ou=groups,o=your Company" +connect

How do you set up a group in LDAP?

This is where it gets interesting. You can define a static group with its list of members, or create a dynamic group which is more flexible and “modern” (where modern is within the last 30 years).

Using a static group (with a list of members defined in it)

You can define a static group in LDAP using

dn: cn=mqstatic,ou=groups,o=your Company
objectclass: groupOfNames
ou: groups
member: cn=ibmuser,o=your Company
member: cn=adcdb,o=your Company

It has two members.

When the userid authenticates, the queue manager asks LDAP for the groups that the userid is in. Using the AUTHINFO definitions CLASSGRP('groupOfNames'), GRPFIELD(…) and FINDGRP('…'), a query is done for the groups which have the userid as a member. For example with

BASEDNG('ou=groups,o=your Company') +
CLASSGRP('groupOfNames') +
FINDGRP('member') +

and userid cn=ibmuser, o=your Company, the query is

(&(objectClass=groupOfNames)(member=cn=ibmuser, o=Your Company)) in subtree ou=groups,o=your Company

An ldapsearch asking to return the cn attribute

ldapsearch … -b "ou=groups,o=your Company" "(&(objectClass=groupOfNames)(member=cn=ibmuser, o=Your Company))" cn

gave two group names –

cn=mqstatic2,ou=groups,o=your Company
cn=mqstatic,ou=groups,o=your Company

This gives the information as to which groups a user is in. MQ then saves the userid and group information, and does not need to go to LDAP the next time the userid needs access checking.

Knowing which groups a userid is in, MQ can then decide on the access by comparing with the setmqaut definitions.
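
As a sketch of this AUTHORMD(SEARCHGRP) step, the following hypothetical code (group entries and names are made up, and this is not MQ's implementation) finds the groups whose FINDGRP attribute ("member") contains the user's DN:

```python
# Hypothetical sketch of AUTHORMD(SEARCHGRP): given group entries (as
# dicts), return the DNs of groups listing the user's DN as a member.

groups = [
    {"dn": "cn=mqstatic,ou=groups,o=your Company",
     "objectClass": ["groupOfNames"],
     "member": ["cn=ibmuser,o=your Company", "cn=adcdb,o=your Company"]},
    {"dn": "cn=other,ou=groups,o=your Company",
     "objectClass": ["groupOfNames"],
     "member": ["cn=adcdb,o=your Company"]},
]

def groups_for(user_dn, entries, classgrp="groupOfNames", findgrp="member"):
    """Mimic the (&(objectClass=CLASSGRP)(FINDGRP=user_dn)) search."""
    return [g["dn"] for g in entries
            if classgrp in g["objectClass"] and user_dn in g.get(findgrp, [])]

print(groups_for("cn=ibmuser,o=your Company", groups))
```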

Using a dynamic group – or using a group attribute in the user record.

Instead of a group having a list of members, you can add information to the user’s record.

For example

dn: cn=adcda, o=Your Company
changetype: add
objectclass: top
objectclass: person
objectclass: organizationalPerson
objectclass: ibm-nativeAuthentication
sn: adcda
group: cn=group,o=your Company
cn: mq
ibm-nativeId: adcda

Unfortunately there is no “group” attribute defined in the LDAP schema, so I had to find another attribute to use. I used st (state). So I used st: cn=group,o=your Company instead of group: cn=group,o=your Company.

In my MQ AUTHINFO definition I had

BASEDNU('o=Your Company') +
CLASSUSR('ibm-nativeAuthentication') +
SHORTUSR(sn) +
AUTHORMD(SEARCHUSR) +
FINDGRP(st) +
BASEDNG('ou=zzzzz,o=your Company') +

MQ did an LDAP search using this information

ldapsearch… -b "o=Your Company" "(&(objectClass=ibm-nativeAuthentication)(sn=ibmuser))" st

This is just a display of the userid information, returning the fields with the attribute you specified (st). It returned

cn=ibmuser, o=Your Company
st=cn=group,ou=groups,o=your Company
st=cn=mqadmin,ou=groups,o=your Company

This tells MQ to use the groups

cn=group,ou=groups,o=your Company and cn=mqadmin,ou=groups,o=your Company.

You can use the setmqaut command to give the group access

setmqaut -m qml -t qmgr -g "cn=mqadmin,ou=groups,o=your Company" +connect

Once this was done, the cn=ibmuser could connect to MQ using groups.
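
The AUTHORMD(SEARCHUSR) flow above can be sketched as follows. This is my hypothetical illustration, not MQ code: the group DNs come straight from the FINDGRP attribute ("st" in my setup) of the user's own record:

```python
# Hypothetical sketch of AUTHORMD(SEARCHUSR): the user's record carries
# its group DNs in the FINDGRP attribute, so no group search is needed.

user = {
    "dn": "cn=ibmuser, o=Your Company",
    "st": ["cn=group,ou=groups,o=your Company",
           "cn=mqadmin,ou=groups,o=your Company"],
}

def groups_from_user(entry, findgrp="st"):
    """Return the group DNs held in the user's own entry."""
    return entry.get(findgrp, [])

print(groups_from_user(user))
```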

Can I use LDAP to hold my setmqaut information?

Only the group and userid information is held in LDAP; all the other information is held in the queue manager. You cannot hold your setmqaut configuration in LDAP and share it between multiple queue managers. You still have to use setmqaut to set up access on each queue manager.

Giving userids access to MQ objects

You can use the setmqaut command or the set authrec runmqsc command to give principals or groups access to resources.

For example

setmqaut -m qml -n CP0000 -t queue -g "cn=mqstatic,ou=groups,o=your Company" +inq

I’ve changed the definitions in LDAP – when will they get picked up?

Changes get picked up when

  • the queue manager is restarted
  • the refresh security, refresh security type(authserv), or refresh security type(connauth) command is issued.

I’ve started LDAP – now what do I do?


What is LDAP?

I found this a good introduction to LDAP: the structure of the data, searching and filters.

I’ve written up:

Setting up LDAP on z/OS

I created a new LDAP instance on z/OS see getting started with LDAP on z/OS, and the definitions and JCL I used to create LDAP. I used the standard schema /usr/lpp/ldap/etc/schema.user.ldif and the IBM extensions /usr/lpp/ldap/etc/schema.IBM.ldif which give you the attributes for working with RACF etc.

Now what do I do?

It is much easier to set up your LDAP structure properly before you start adding in lots of records, rather than try to change the structure once you have populated it with all your data. You could be agile: develop your LDAP data, back up the data, and “just” recreate the LDAP repository once you know what you want. Where “just” means write Python or Rexx scripts to take the LDAP data and convert it to the new format, for example adding additional information to every definition before adding it back to the directory.

Because I did not fully understand how Access Control Lists work, I managed to make some of my data invisible to the end user requests, so it is easy to make mistakes when you do not know what you are doing. This blog should give you some hints about setting up your LDAP environment, and avoid some of the rework.

Background to LDAP

LDAP is a generalised directory with an application interface over IP.

The data is held in a hierarchical (upside-down tree) form; for example the top of the tree may be called o=myorg, where o stands for Organisation.

You configure this top of the tree in the LDAP config file for example

# this defines a file based database
database LDBM GLDBLD31/GLDBLD64
# this says it can use RACF for password checking
useNativeAuth all
#this is the top of the tree
suffix "o=myorg"
# this is the location on disk of the database
databaseDirectory /var/ldap/ldbm

The next levels down might be

  1. ou=users, o=myorg
  2. ou=groups,o=myorg
  3. ou=corporate data,o=myorg

The data for the first subtree could be stored in DB2, the data for the second subtree could be in files, and the data for the third subtree could be in another LDAP, somewhere else.

A “record” or leaf of the tree could be identified by

dn:cn=colin paice,c=GB,ou=users,o=myorg

Where

  • cn= is the common name
  • c= is the country name
  • ou= is the organisational unit
  • o= your organisation.

With each record you need one or more object classes. Object classes have attributes. For example an object class of person can have an attribute telephoneNumber. If you want to use telephoneNumber you need an object class that supports it.

A typical entry might be

dn: cn=mq, o=Your Company
objectclass: top
objectclass: organizationalPerson
cn: mqadmin
telephoneNumber: 1234567
telephoneNumber: 987654321
sn: mqadmin

Logging on with z/OS userid and password.

I set up LDAP so I could logon to the Linux queue manager, and use my z/OS userid and password. For this I had

dn: cn=ibmuser, o=Your Company
objectclass: top
objectclass: person
objectclass: organizationalPerson
objectclass: ibm-nativeAuthentication
cn: ibmuser
sn: snibmuser
ibm-nativeId: IBMUSER

Where

  • objectclass: ibm-nativeAuthentication is for the RACF authorisation
  • ibm-nativeId: IBMUSER says use the RACF userid IBMUSER when cn=ibmuser, o=Your Company is used.

I also set up an Access Control List entry for this userid, so it can search, and read entries

dn: o=Your Company
changetype: modify
aclEntry: access-id:cn=ibmuser, o=Your Company:normal:grant:rscw

This says

  • for the subtree under the distinguished name of o=Your Company. DNs of ou=groups,o=myorg, and ou=users,o=myorg would be more typical subtree names.
  • the dn cn=ibmuser, o=Your Company, the dn of the user for this ACL. This would normally be a group rather than a userid. You can have multiple entries for each dn.
  • has read, search, compare and write (rscw) access on normal fields. A social security number is a “sensitive” field, and a password is a “critical” field. This ACL only gives access to normal fields.
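
To make the structure of an aclEntry value concrete, here is a sketch that splits the simple shape shown above into its parts. It is a hypothetical helper handling only this one form; the real TDS aclEntry syntax has many more options:

```python
# Hypothetical sketch: split an aclEntry value of the form
#   access-id:<dn>:normal:grant:rscw
# into its parts. Only this simple shape is handled.

def parse_aclentry(value):
    parts = value.split(":")
    return {"subject_type": parts[0],   # access-id or group
            "subject": parts[1],        # the DN the ACL applies to
            "class": parts[2],          # normal / sensitive / critical
            "action": parts[3],         # grant or deny
            "perms": set(parts[4])}     # r=read s=search c=compare w=write

acl = parse_aclentry("access-id:cn=ibmuser, o=Your Company:normal:grant:rscw")
print(acl["subject"])        # cn=ibmuser, o=Your Company
print(sorted(acl["perms"]))  # ['c', 'r', 's', 'w']
```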

What do you want in a year’s time?

It is much easier to set up your LDAP structure properly before you start, rather than try to change the structure once you are using it.

For example you could have a flat tree with entries like

  • dn:cn=colin paice,o=myorg for users
  • dn:cn=mqadmin,o=myorg for groups

This will quickly become hard to manage. You may find the following are better.

  • dn:cn=colin paice,ou=users,o=myorg for users
  • dn:cn=mqadmin,ou=groups,o=myorg for groups
  • dn:cn=testmqadmin,ou=groups,o=myorg for a different group with the department name in it.

or

  • dn:cn=colin paice,c=gb,ou=users,o=myorg for users by country
  • dn:cn=mqadmin,ou=test,ou=groups,o=myorg for test groups

Administration

You can give access to administer nodes in the tree, at the subtree level. For example for the sub node in the tree ou=test,ou=groups,o=myorg might have administrators.

  • cn=hqadmin,ou=groups,o=myorg
  • cn=colin paice,c=gb,ou=users,o=myorg

Having a structure like dn:cn=mqadmin,ou=test,ou=groups,o=myorg means you can give the test manager admin control of the test groups, while the test manager has no authority over the production groups. With a structure like dn:cn=mqadmin,ou=groups,o=myorg it is much harder to separate the responsibilities.

For the top of the tree o=myorg, you could set it up so that only the following group has administration authority.

  • cn=hqadmin,ou=groups,o=myorg

From a performance perspective it will be cheaper and faster to access data in a subtree, rather than search the whole tree – bearing in mind you could have millions of entries in the tree.

Note: The adminDN userid in the LDAP config file has authority over everything. The ACLs on the tree, subtree, or record define who else has administration authority.

Controlling access

You control access using Access Control Lists. An ACL looks like

dn: ou=users,o=myorg
changetype: modify
replace: aclEntry
aclEntry: access-id:cn=ibmuser, o=Your Company:
  object:ad:normal:grant:rscw:sensitive:rscw:
  critical:rscw:restricted:rscw
aclEntry: group:cn=authenticated:normal:rsc

This defines the access for the subtree ou=users,o=myorg

  • cn=ibmuser, o=Your Company can add and delete entries under the subtree, and use any of the fields, including sensitive (eg social security number) and critical (eg password).
  • group: cn=authenticated any user who has authenticated can read, search and compare on normal fields. They cannot see or select on sensitive or critical fields.
  • object:a|d is to add or delete objects in the subtree.
  • normal:… sensitive:… critical:… these give access to the fields in the data (so you can issue an ldap_search, for example). You can specify grant: or deny: on attributes, and so give or remove access.
  • restricted: To update ACLs you must have read and write access.
  • You can specify what access people have to individual fields, so you can give them access to most fields, but deny access to specific fields.

The userid defined in the LDAP configuration file under adminDN can change anything. When I messed up my data, I changed the config file to use adminDN cn=admin and password secret1, stopped and restarted the LDAP server, and fixed my problem. I undid the changes to the config file, stopped and restarted the server, to get back to normal operation.

You need to plan on what access you want, at which levels of the tree, and who has what access to which fields.

It is better to give access to groups, and add users to groups, than to give access to userids. For example, if someone joins the team, adding the id to the group is one change; if instead you had to add the id to each of many ACLs, you would have many changes.

What is going to use it?

Programs using LDAP may have requirements on the data structure. For example MQ can use userid and group information in LDAP. It expects the data to be under a single subtree. You could specify that groups are to be found under ou=test,ou=groups,o=myorg; you cannot say under both ou=test,ou=groups,o=myorg and ou=production,ou=groups,o=myorg. Similarly, users would be under ou=users,o=myorg. If you specified the subtree c=gb,ou=users,o=myorg, then this limits users to having a GB userid, which may not be what you want.

Using groups

Although LDAP on z/OS can support static groups (with a list of members), nested groups (groups containing groups), and dynamic groups (where you can say: those users with ou=production), the nested and dynamic groups cannot be used to check group membership.

You can query and display all users in all group types.

You can say: give me only the group names where colinPaice is a member – but only when the group is a static group, not a group of groups, nor a dynamic group. This makes managing groups much harder, and you may need bigger groups rather than specific smaller groups.

You cannot exploit the flexibility of the nested and dynamics groups.

What fields do you want in the records?

If you have a definition like

dn: cn=ibmuser, o=Your Company
objectclass: top
objectclass: person
objectclass: organizationalPerson
objectclass: ibm-nativeAuthentication
cn: ibmuser
sn: snibmuser
ibm-nativeId: IBMUSER

If you want to use the ibm-nativeId field to give the RACF userid to use, then you need objectclass: ibm-nativeAuthentication.

If you have ibm-nativeAuthentication you must have ibm-nativeId, and may have other fields.

The “must” and “may” fields are defined in a schema.

The schema /usr/lpp/ldap/etc/schema.IBM.ldif has

objectclasses: (
  NAME 'ibm-nativeAuthentication'
  DESC 'Indicates native security manager should be used during authentication.'
  SUP top
  AUXILIARY
  MUST ( ibm-nativeId )
)

The objectClass: person has

objectclasses: (
  NAME 'person'
  DESC 'Defines entries that generically represent people.'
  MUST ( cn $ sn )
  MAY ( userPassword $ telephoneNumber $ seeAlso $ description )
)

This means you must provide a cn and an sn entry, and you may provide the other entries.

The schema can give information about an attribute

attributetypes: (
  NAME ( 'sn' 'surName' )
  DESC 'This is the X.500 surname attribute, which contains the family name of a person.'
  EQUALITY caseIgnoreMatch
  ORDERING caseIgnoreOrderingMatch
  SUBSTR caseIgnoreSubstringsMatch
)

This shows that for ordering or comparing, it ignores the case of the data, so “colin paice” is the same as “COLIN PAICE”.
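
A minimal sketch of what EQUALITY caseIgnoreMatch implies (note that real servers also normalise spacing, which this hypothetical helper ignores):

```python
# Sketch of a caseIgnoreMatch-style comparison: equality folds case,
# so "colin paice" matches "COLIN PAICE".

def case_ignore_match(a, b):
    return a.casefold() == b.casefold()

print(case_ignore_match("colin paice", "COLIN PAICE"))   # True
print(case_ignore_match("colin paice", "colin w paice")) # False
```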

You need to decide what fields you want, and how you want the fields to be processed, for example case sensitivity and comparison.

You can write your own schema for fields that are unique to your organisation.

What you use the field for is up to you. For example “sn” could be surname, or “short name”. You just have to be consistent and document it.

What tools are there to help me?

You can use the tools provided with LDAP to administer LDAP. For example the ldapmodify command can be used to process a batch of definitions; whole records, or attributes within records.

Eclipse has a plugin, Apache Directory Studio. It works well, and seems to be highly recommended. This plugin allows you to browse and manage entries. I could not get it to display the schema.

How do I backup/export the data

You can use the ds2ldif command. It creates a file in ldif format which can be used to add back all the records (using the ldapadd or ldapmodify commands).

Using groups for authority and access checking is so last century.

I’ve been exploring LDAP as a userid repository (as can be used by MQ multi platform). This got me into an interesting rabbit warren of Role Based Access Control (RBAC) and Attribute Based Access Control(ABAC), and how you set up your repository to hold userid and access information.

In LDAP on z/OS I can set up a user

dn: cn=adcda, o=Your Company
objectclass: top
objectclass: person
objectclass: organizationalPerson
objectclass: ibm-nativeAuthentication
sn:adcda
cn: mqadmin
ibm-nativeId: adcda

With this, if I try to log on with cn=adcda, o=Your Company, it will use RACF to check that the password I specified is valid for the userid in ibm-nativeId (adcda). I’ve logged on to MQ on Linux using this definition, and had RACF on z/OS check my password. (I thought this was pretty neat.)

This definition has an attribute sn (surName) of adcda and a cn (commonName) of mqadmin.

LDAP groups.

You can set up static LDAP groups

These have a list of members

dn: cn=ldap_team_static,o=myorg
objectclass: groupOfNames
cn: ldap_team_static
member: cn=colin,o=myorg
member: cn=colin2,o=myorg

Groups within groups

A group can have a group name to be included

dn: cn=ldap_team_nested,o=myorg
objectclass: container
objectclass: ibm-nestedGroup
cn: ldap_team_nested
ibm-memberGroup: cn=ldap_team_static,o=myorg
ibm-memberGroup: cn=mq_team,o=myorg

You can display group members

ldapsearch … -b "cn=ldap_team_nested,o=myorg" "objectclass=*" ibm-allMembers

You can also have smart, dynamic groups

This is the “new” way of doing it – which has been available for about 30 years.

dn: cn=dynamic_team,o=Your Company
objectclass: groupOfUrls
cn: dynamic_team
memberurl: ldap:///o=Your Company?sub?(cn=mqadmin)

This says

  • query the tree under o=Your Company (a more realistic subtree would be ou=users,o=Your Company)
  • sub says search all levels in the tree (base is search just the specified URL, one is search just one level below the specified URL)
  • list all those with the specified attribute cn=mqadmin.

Instead of updating a group, you add information into the user’s entry, and it gets picked up automatically.
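
A sketch of pulling the search base, scope and filter out of the memberurl above. This hypothetical parser only handles the exact shape shown (ldap:///base?scope?filter); real LDAP URLs (RFC 4516) also carry an attribute list and host name:

```python
# Hypothetical sketch: parse a memberurl of the shape used in the
# dynamic_team entry above. Not a full RFC 4516 LDAP URL parser.

def parse_memberurl(url):
    prefix = "ldap:///"
    assert url.startswith(prefix)
    base, scope, filt = url[len(prefix):].split("?")
    return {"base": base, "scope": scope, "filter": filt}

m = parse_memberurl("ldap:///o=Your Company?sub?(cn=mqadmin)")
print(m["base"])    # o=Your Company
print(m["scope"])   # sub
print(m["filter"])  # (cn=mqadmin)
```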

Ideally there would be an LDAP attribute “role” which you could use for this. The default schemas do not have this.

RACF

If you are using RACF you can set up a userid, and connect it to a group. RACF does not support nested groups for authority and access checking.

Access to resources

Using group or Access Control List

Many systems provide groups or Access Control Lists (ACLs) to control access to a resource.

For example you might say users in group MQADMIN can update dataset MQ.JCL, and userids in group MQOTHER can read the MQ.JCL dataset.

This has limitations in that the resources are treated individually, so if you have 10 files, you have to grant a group access to 10 profiles.

Role Based Access Control(RBAC)

I initially struggled to see the difference between RBAC and groups or ACLs.

With RBAC you do not give update or read access to a resource; you give access to a “task” or “role” like “Maintain records”, “client admin”, or “clerk”. You then give the roles the appropriate access. You could implement this at a basic level using groups called MAINTREC and CLIENTADM with update access to the resource, and a group CLERK with read access.

Attribute Based Access Control(ABAC)

ABAC seems to take this further. There are products you buy which can do this for you, but I could not see how it was configured or how it worked. Below is my interpretation of how I might configure it using LDAP.

You could have a user defined like

dn: cn=colin paice, o=Your Company
role: doctor
sn:adcda
cn: mqadmin
ibm-nativeId: adcda

A set of resources like

dn: cn=doctor update,o=Your Company
table: HOSPITAL.PATIENT
table:HOSPITAL.XRAYS.DATA
MQQueue:Surgery.queue

This is a list of resources a doctor needs to do their job.

And a set of rules

dn:cn=doctorsRules,o=Your Company
role: doctor
resource: cn=doctor update,o=Your Company
site:London
site:Glasgow
reason:Patient Update

If someone (Doctor Colin Paice) wants to update a patient’s record, you can do a query using dynamic groups:

  1. What roles does Colin Paice have?
  2. What is the resource group for the DB2 table HOSPITAL.PATIENT?
  3. Is there a valid rule for the list of roles for Colin Paice, with the resource group for the table HOSPITAL.PATIENT, where access is from London, and the reason is Patient Update?
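
The three-step check might be coded like this. This is only my interpretation of ABAC (all the names and data are the hypothetical examples from above, not a real product's logic):

```python
# Toy ABAC check following the three steps above: the rule matches if the
# user has the rule's role, the resource is in the rule's resource group,
# and the site and reason match.

user = {"dn": "cn=colin paice, o=Your Company", "roles": ["doctor"]}
resource_group = {"dn": "cn=doctor update,o=Your Company",
                  "tables": ["HOSPITAL.PATIENT", "HOSPITAL.XRAYS.DATA"]}
rule = {"role": "doctor", "resource": "cn=doctor update,o=Your Company",
        "sites": ["London", "Glasgow"], "reason": "Patient Update"}

def allowed(user, table, site, reason):
    return (rule["role"] in user["roles"]              # step 1: role
            and rule["resource"] == resource_group["dn"]
            and table in resource_group["tables"]      # step 2: resource
            and site in rule["sites"]                  # step 3: rule checks
            and reason == rule["reason"])

print(allowed(user, "HOSPITAL.PATIENT", "London", "Patient Update"))  # True
print(allowed(user, "HOSPITAL.PATIENT", "Paris", "Patient Update"))   # False
```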

Is it that simple?

You have to be able to handle the case where a doctor may only look at the patient’s notes if the doctor is the “attending physician” – so the person is a patient of the doctor. This might mean a “patientOf: Doctors_name” field in the user’s record.

It looks like you have to be very careful in setting this environment up, as you could have many thousands of rules, and it could be very hard to manage.

Even if it is hard, I think the idea of virtual groups, where you select records based on a criterion, is a good idea. It may be faster than using groups, because it can exploit the index capability of the underlying database, rather than build lists of group membership.

One minute MVS: LDAP defining resources

Having set up an LDAP server, you need to add information to the directory. This is not very well described in the TDS documentation.

Basic overview of data

To add information about a user, use a file in USS like colin.ldif

dn: cn=colin, o=Your Company
objectclass: top
objectclass: person
objectclass: organizationalPerson
cn: LDAP Administrator
sn: Administrator

Where

  • the “key” to identify an entry is the dn.
  • objectclass is the sort of object, and defines what attributes the entry can have. An entry can have many object classes.
  • cn: and sn: are attribute values.
  • A blank line indicates the end of a definition. You can have many definitions in a file, to allow you to do a bulk update.
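
The bullets above (entries separated by blank lines, attr: value pairs) can be sketched as a simplified parser. This hypothetical code ignores ldif continuation lines, comments and base64 values:

```python
# Sketch: split a (simplified) ldif file into entries on blank lines,
# and each entry into attribute -> list-of-values.

LDIF = """\
dn: cn=colin, o=Your Company
objectclass: top
objectclass: person
cn: LDAP Administrator
sn: Administrator

dn: cn=colinw, o=Your Company
objectclass: person
sn: Administrator
"""

def parse_ldif(text):
    entries = []
    for block in text.strip().split("\n\n"):
        entry = {}
        for line in block.splitlines():
            attr, value = line.split(":", 1)
            entry.setdefault(attr, []).append(value.strip())
        entries.append(entry)
    return entries

entries = parse_ldif(LDIF)
print(len(entries))      # 2
print(entries[0]["dn"])  # ['cn=colin, o=Your Company']
```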

How do I display the contents?

You need to issue a query. This comes in two parts: identifying the requestor, and the request itself.

To identify requestor you need something like

ldapsearch -h 127.0.0.1 -D "cn=ibmuser, o=Your Company" -w ? …

and the query. For example, to list all the information about cn=ibmuser, o=Your Company, add the following to the ldapsearch request above

-b "cn=ibmuser, o=Your Company" "(objectclass=*)"

This gives

cn=ibmuser, o=Your Company
objectclass=top
objectclass=person
objectclass=organizationalPerson
objectclass=ibm-nativeAuthentication
cn=ibmuser
sn=Administrator
ibm-nativeid=IBMUSER

For all information under o=Your Company

-b "o=Your Company" "(objectclass=*)"

For only the list of sn for all users

-b "o=Your Company" "(objectclass=*)" sn

This gives

o=Your Company

cn=colinw, o=Your Company
sn=Administrator

cn=colin, o=Your Company
sn=Administrator

cn=LDAP Administrator, o=Your Company
sn=Administrator

cn=ibmuser, o=Your Company
sn=Administrator

What authority do I need?

Typically you need to be an LDAP administrator, or have the appropriate access through Access Control Lists. See here for managing ACLs.

How do I add information?

If I want to add a userid definition for ibmuser (above) so I can login with RACF, I need to add the attribute

ibm-nativeId: COLIN

This attribute is in object type

objectclass: ibm-nativeAuthentication

So to be able to specify the ibm-nativeId: attribute, you need to specify the object class as well.

My definition is now

dn: cn=colin, o=Your Company
objectclass: top
objectclass: person
objectclass: organizationalPerson
objectclass: ibm-nativeAuthentication
cn: LDAP Administrator
sn: Administrator
ibm-nativeId: COLIN

Add it to the directory

You can add this to the directory using

ldapmodify -a -h … -p … -D "…" -w … -f colin.ldif

Where

  • -a says add (instead of modify)
  • -f colin.ldif is the name of the file with the statements in it.

Modifying an entry

If you want to modify an existing entry, you can change the whole entry, or parts of it.

To add an entry

dn: cn=colin, o=Your Company
changetype: add
objectclass: top

To delete a whole entry

dn: cn=colin, o=Your Company
changetype: delete

To add an attribute to an entry

dn: cn=colin, o=Your Company
changetype: modify
add: attrccp
attrccp: value1
attrccp: value2…

This adds two attrccp values to the definition

To modify an existing attribute

dn: cn=colin, o=Your Company
changetype: modify
replace: ibm-nativeId
ibm-nativeId: PAICE

To delete an attribute

dn: cn=colin, o=Your Company
changetype: modify
delete: ibm-nativeId

This deletes all ibm-nativeId attributes.

If you want to delete a specific attribute value, specify it after the delete: line

dn: cn=colin, o=Your Company
changetype: modify
delete: attrccp
attrccp: value2
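
You can also generate these change records programmatically. This is a hypothetical helper (the function name and the attrccp attribute are just illustrations) that builds the "add an attribute" record shown above:

```python
# Sketch: build an ldif change record that adds attribute values to an
# existing entry, matching the shape used in this post.

def ldif_add_attr(dn, attr, values):
    lines = ["dn: %s" % dn, "changetype: modify", "add: %s" % attr]
    lines += ["%s: %s" % (attr, v) for v in values]
    return "\n".join(lines) + "\n"

out = ldif_add_attr("cn=colin, o=Your Company", "attrccp",
                    ["value1", "value2"])
print(out)
```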

One minute MVS. Getting started with LDAP on ADCD.

LDAP is a standard protocol for accessing directory information over TCP/IP. For example the command

ldapsearch -h 127.0.0.1 -D "cn=Admin, o=Your Company" -w secret -b "o=Your Company" "(objectclass=*)" aclEntry

This sends a request to IP address 127.0.0.1, with userid cn=… and password “secret”, for information under the subtree of “o=Your Company”, and requests it send back information on any ACL entries.

z/OS implementation

LDAP on z/OS is also known as Tivoli Directory Server.

It can run with different backend databases, from DB2 to files in a USS directory. It can interface to RACF, so you can query userid and group information from RACF through LDAP.

Schemas.

You need to configure a schema saying what fields there are, and their relationships. For example, for an organisation telephone directory you might have

dn: cn=LDAP Administrator, o=Your Company
objectclass: organizationalPerson
cn: LDAP Administrator
sn: Administrator
userPassword: ********
phoneNumber:1234567

Where

  • dn: cn=LDAP Administrator, o=Your Company This is the internal name of the object, and says what part of the data tree it belongs to (“o=Your Company”)
  • objectclass: organizationalPerson defines the object type
  • cn: LDAP Administrator This is the common name ( nick name) of the object
  • sn: Administrator This is the surname of the person
  • userPassword: ******** This is the user’s password. It has been defined that the value is not displayed
  • phoneNumber:1234567 This has been defined so that it can only take numbers and ‘-’.
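
The phoneNumber restriction implies a syntax check like the following. This is a hypothetical validator of my own, not TDS code:

```python
# Sketch of the sort of syntax check implied above for phoneNumber:
# only digits and '-' are allowed.
import re

def valid_phone(value):
    return re.fullmatch(r"[0-9-]+", value) is not None

print(valid_phone("1234-567"))  # True
print(valid_phone("12A4567"))   # False
```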

You can define your own attributes and properties. You just need to update the schema.

Which database is used?

A sample LDAP configuration might contain

database LDBM GLDBLD31/GLDBLD64
suffix "o=Your Company"
databaseDirectory /var/ldap/ldbm

  • database LDBM says there is a database in a USS directory
  • GLDBLD31/GLDBLD64 are the names of the interface routines to use.
  • suffix “o=Your Company” is the root of the subtree in this database
  • databaseDirectory /var/ldap/ldbm is the name of the USS directory

You can configure LDAP to say: for these names (o=someoneElsesCompany), go to another LDAP server at this address.

If I use a query like ldapsearch -h 127.0.0.1 -D "cn=LDAP Administrator, o=Your Company" -w secret -b "o=Your Company" "(objectclass=*)" aclEntry, the -D "cn=LDAP Administrator, o=Your Company" says look for a userid with the given data in the o=Your Company subtree. With the above definitions it would look in the USS file system under /var/ldap/ldbm for a userid cn=LDAP Administrator, o=Your Company.

Configuring an LDAP server on ADCD.

ADCD is a preconfigured system which can run on zPDT and ZD&T. These provide a System/390 emulator. The system comes with a lot of software installed, and some subsystems, such as MQ, Db2, IMS, CICS and z/OSMF, preconfigured on z/OS.

The software for LDAP(Tivoli Directory Server) is installed but not configured. The documentation is extensive, and the configuration file is very large (with lots of comments). You run a configuration script which produces some files.

However for a simple configuration you only need a few files to run.

Some of these files do not work – for example they try to define a userid with an existing Unix uid.

I’ve taken the updated files and put them on GitHub.

The TDS documentation is here.

If you get into a mess you can just delete the /var/ldap/ldbm directory and start again!

Getting cipher keys to another site – the basics of Exporter and Importer keys in ICSF.

I’ve spent some time (weeks) exploring ICSF with the overall mission of sending an encrypted data set between two sites. Looking back it was like the saying when you’re up to your neck in alligators, it’s hard to remember that your initial objective was to drain the swamp.

I’ve explored many parts of ICSF. One area that confused me for a while was the use of Key Encrypting Keys, or Exporter and Importer keys, also known as transport keys. I’ll explain my current thoughts on it – bearing in mind these may not be 100% accurate.

If I want to encrypt data on one system, send the encrypted data to another system, and decrypt it on that system, the sending system needs the encryption key, and the receiving system needs the decryption key.

Typically the data encryption is done using a symmetric key, where the same key is used to encrypt as to decrypt. You can also use asymmetric keys, where you encrypt with one key but need a different key to decrypt.

The first challenge is how to securely get the keys onto the systems.

  • You cannot just email the symmetric key to the remote site, because bad people monitoring your email would be able to get the symmetric key.
  • You could print the key and send it through the mail system, courier, or carrier pigeon to the remote site. This still means that bad guys could get the key (using a telephoto lens through a window, through the security cameras, or by catching the pigeon).
  • A technique called Diffie-Hellman can be used to securely create the same symmetric key at each end. It uses private/public keys and an agreed seed. No sensitive data is sent between the two systems.

When setting up ICSF to use keys across systems, you need to set up keys for both A to B and for B to A. You can use the same key pairs and seed for both directions, but the generated keys will be different.

If you are setting up several independent systems you will need keys A to B, B to A, A to C, C to A, B to C and C to B etc.
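
The Diffie-Hellman exchange mentioned above can be sketched with toy numbers (a Python illustration; real implementations, including the elliptic-curve variant ICSF uses, work with very large values):

```python
# Toy Diffie-Hellman: both sides derive the same key without ever sending it.
p, g = 23, 5                    # public, agreed values (the "seed")
a_secret, b_secret = 6, 15      # each side's private value

a_public = pow(g, a_secret, p)  # side A sends this to side B
b_public = pow(g, b_secret, p)  # side B sends this to side A

# Each side combines its own secret with the other side's public value
key_at_a = pow(b_public, a_secret, p)
key_at_b = pow(a_public, b_secret, p)
assert key_at_a == key_at_b     # the same symmetric key at both ends
print(key_at_a)                 # prints 2
```

An eavesdropper sees only p, g and the two public values; recovering the secrets from those is the hard problem the technique relies on.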

Setting up keys

You can set up a CIPHER key for encrypting data, and you can set up a MAC key for checking that what was sent is the same as what was received. (You hash the data, then encrypt the hashed value, and send this along with the data. If both ends do it, they should get the same answer!)
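
The keyed-hash idea can be sketched in a few lines (a Python illustration of the concept, not the ICSF API; the key and message values are invented):

```python
import hashlib
import hmac

key = b"shared-mac-key"            # both ends hold this MAC key
data = b"the message being sent"

# Sender: compute a keyed hash (MAC) over the data and send it with the data
mac_sent = hmac.new(key, data, hashlib.sha256).digest()

# Receiver: recompute the MAC over what arrived and compare
mac_here = hmac.new(key, data, hashlib.sha256).digest()
assert hmac.compare_digest(mac_sent, mac_here)   # same answer: data intact

# Any change to the data gives a different MAC
bad = hmac.new(key, b"the message being bent", hashlib.sha256).digest()
assert not hmac.compare_digest(mac_sent, bad)
```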

Decrypting and re-encrypting

The keys are stored on disk in an encrypted format. The key for this is within the cryptographic hardware. If you want to send a key to another system, you need to encrypt it.

ICSF has a function in the hardware which says "here is some data encrypted with the hardware key; decrypt it, and re-encrypt it with this other key". It has a matching function: "here is some encrypted data; decrypt it, and re-encrypt it with the hardware key". This way the clear text of the key is never seen. To extract a key, you need to provide a key to re-encrypt it under.
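
A toy model of this decrypt-and-re-encrypt flow (a Python sketch, with XOR standing in for real encryption; the key values are invented, and in ICSF all of this happens inside the cryptographic hardware):

```python
# XOR stands in for AES here, purely to show the flow of wrapped keys.
def toy_crypt(data: bytes, key: bytes) -> bytes:
    return bytes(d ^ k for d, k in zip(data, key))

master_key_a  = b"AAAAAAAAAAAAAAAA"   # inside system A's hardware
master_key_b  = b"BBBBBBBBBBBBBBBB"   # inside system B's hardware
transport_key = b"TTTTTTTTTTTTTTTT"   # the exporter (A) / importer (B) key
clear_key     = b"data-cipher-key!"   # the key we want to send to B

stored_at_a = toy_crypt(clear_key, master_key_a)   # as held in A's key store

def reencrypt(wrapped: bytes, from_key: bytes, to_key: bytes) -> bytes:
    # Models the hardware call: the clear key exists only inside this function.
    return toy_crypt(toy_crypt(wrapped, from_key), to_key)

exported    = reencrypt(stored_at_a, master_key_a, transport_key)  # send this
stored_at_b = reencrypt(exported, transport_key, master_key_b)     # B's store
assert toy_crypt(stored_at_b, master_key_b) == clear_key
```

The clear key never appears in any of the data that is stored or sent; it only ever exists wrapped under one key or another.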

How to send the key to the remote system?

You have several choices depending on the level of security your enterprise has.

ICSF has a function to export a symmetric key using

  • an RSA public key. At the remote end you need the private key to be able to decrypt it. This needs an RSA key size of at least 2048.
  • an exporter key. At the remote end you need the matching importer key to decrypt it. Under the covers this is a symmetric (AES) key. The AES technique is faster, and "stronger" than RSA (it takes longer to break). An AES 256 key is considered stronger than an RSA 4096 key.

I think of exporter and importer keys as a generalised public key and private key. The concept may be the same, but the implementation is very different.

The question "should I use RSA or Exporter/Importer?" comes down to how secure you want to be. If you export a key once a week, the costs are small.

These keys, used to encrypt other keys, are known as key encrypting keys; you often see KEK in the documentation.

What are the advantages of using an exporter and an importer key?

If you are using a symmetric key to encrypt your data, why are there exporter and importer keys, and not just one key?

If a purely symmetric key were used as a key encrypting key, then anyone allowed to encrypt with it could also decrypt with it. That would mean you could decrypt other people's data which used the same key.

By having one key for encryption and another key for decryption you can isolate the authorities. If I have access to the exporter key, even if I extract it and send it to my evil colleague at the remote end, they cannot use it to decrypt the data. If I have access to a symmetric key generated with Diffie-Hellman, the same key is used at each end, so I could extract it and send it to my evil colleague, who could use it.

Setting up my first importer or exporter key.

Use a pair of matching ECC public and private keys. The key type (e.g. Brainpool) and key size must match. You use CSNBKTB2 to define a skeleton of type importer or exporter (or cipher). You then pass the skeleton into CSNDEDH along with the private and public keys. You can then use CSNBKRC2 to add it to the local CKDS.

You do the same at the remote end, just switching the public and private keys, and importer for exporter. You do not need to send a key between the two systems.

Once you have defined your first exporter and importer key there is an alternative way of creating transport keys, using CSNBKGN2.

Alternative way of defining more transport keys.

There are three scenarios to consider when setting up transport keys between two systems.

  • On system A, create an exporter key for system A, and create an importer key for system B. The importer key will need to be encrypted, as it needs to be sent to system B. The system A (local) exporter key can then be used to export a (cipher or MAC) key. Write this key to a file and send it to the other system. At system B, use the importer key to import this data into system B's key store.
  • Do it from the "other end". On system B, create an importer key for system B, and create an exporter key for system A. This exporter key will need to be encrypted, as it needs to be sent to system A. Write this key to a file and send it to the other system. At system A, import it. System A can then use the new key to export a (cipher or MAC) key, write it to a file, and send it to system B, which can import it.
  • Define them on another system. On system C, create an exporter key for system A, and an importer key for system B. These both need to be encrypted (using a different key to encrypt each one). Send the encrypted keys to system A and system B. They can import the keys, and then send keys from system A to system B.

You use CSNBKGN2 to do this. I’ll only cover the first two cases.

You need to specify one of

  • keytype1 = "IMPORTER", keytype2 = "EXPORTER" or
  • keytype1 = "EXPORTER", keytype2 = "IMPORTER"

Use RULE = "OPEX " where

  • OP means store the first key in the local (OPerational) key store.
  • EX means keytype 2 is to be exported. You have to specify an AES exporter key in the field key_encrypting_key_identifier_2.

Note. To export the key you already need an AES exporter key! This means you cannot create your first transport key with this method.

The output of this function is

  1. a key which you can add to the local key store using CSNBKRC2,
  2. a key which you can write to a file and send to the remote system.

At the remote system you use CSNDSYI2 and the Importer key to import the data and re-encrypt it with the local key. Then use CSNBKRC2 to add it to the CKDS.

Which technique is better?

I think using a pair of matching ECC public and private keys and Diffie-Hellman is simpler, as it does not involve sending a file between systems. As this activity is done infrequently it may not matter.

Using DH may not be as secure as the alternative.

How many transport keys do I need?

You could create a new transport key every week if you wanted to. It is only used to send a data key to a remote system, so it is transient. When you have created a cipher key to encrypt and decrypt your data, you need to keep it for as long as you need access to the data. Once you recreate it, you are unable to read data sets encrypted with the previous key.

Depending on your security requirements you might want to have more than one transport key for data isolation. For example, test data and production data could have different keys.


You could use Enterprise Key Management Foundation (EKMF) from IBM to manage your keys.

EASY ICSF – making it easy to use the API to generate and export/import keys

I’ve put some code on GitHub: C and Rexx code which provide a simpler interface to ICSF. The code examples hide a lot of the complexity.

For example, to generate an AES CIPHER key, the high-level C code is

// Build the skeleton for C=CIPHER (could be E for EXPORTER or I for IMPORTER).
// It returns the skeleton and its length.
rc = skeletonAES("C", &pToken, &lToken); 
if ( rc != 0 ) return rc; 

// Generate the key - passing the skeleton and returning the Token
// input the skeleton 
// output the token 
rc = GENAES2(pToken,&lToken); 
if ( rc != 0 ) return rc; 

// Add this to the CKDS                                                        
rc = addCKDS(pKey, pToken, lToken, pReplace); 
if ( rc != 0 ) return rc; 

printf("GENAES %s successful\n",pKey); 
return rc; 

To export an AES key

// Pass in the name of the AES key pKey and
// the name of the encryption key (AES EXPORT or PKI) pKek.
// Get back the blob of data.
rc = exportAES(pKey, pKek, &pData, &lData); 
if (rc > 0) return rc; 
// Write the blob to the file specified by the DD statement
rc = writeKey("dd:TOKEN", pData, lData); 

It gives in //SYSPRINT

Exists: CSNBKRR2 read AESDHE CKDS rc 0 rs 0 No error found


KEY:AESDHE:INTERNAL SYMMETRI EXPORTER CANAES


Exists: CSNBKRR2 read PKDS2 CKDS rc 8 rs 10012 Key not found
Exists: CSNDKRR read PKDS2 PKDS rc 0 rs 0 No error found .


KEK:PKDS2:INTERNAL PKA RSAPRIV 1024MEAO


RSA ¬AES:Rule:AES PKOAEP2 SHA-256 AES AESKW AES


ExpAESK:CSNDSYX rc 8 rs 2055 The RSA public key is too small to encrypt the DES key

Where…

  • Exists: CSNBKRR2 read AESDHE CKDS rc 0 rs 0 No error found
    • It used the ICSF CSNBKRR2 to check AESDHE is in the CKDS
  • KEY:AESDHE:INTERNAL SYMMETRI EXPORTER CANAES
    • It reports some info on the key. It is a Symmetric (AES) Exporter and can do AES processing
  • Exists: CSNBKRR2 read PKDS2 CKDS rc 8 rs 10012 Key not found
    • This is ok — it looks in the CKDS first – but as this is a PKI – it will not be found
  • Exists: CSNDKRR read PKDS2 PKDS rc 0 rs 0 No error found .
    • It is found in the PKDS
  • KEK:PKDS2:INTERNAL PKA RSAPRIV 1024MEAO
    • This gives info about the Key Encryption Key. It is RSA and has a private key. The key size is 1024
  • RSA ¬AES:Rule:AES PKOAEP2 SHA-256 AES AESKW AES
    • This is the rule used
  • ExpAESK:CSNDSYX rc 8 rs 2055 The RSA public key is too small to encrypt the DES key
    • The size of the PKI key was too small.
    • As well as giving the return code and reason code, it gives the reason for some of the reason codes.
    • When I repeated this with a large enough RSA key, it worked successfully.

There are also some macros such as

  • isRSAPRIV… is this token (blob of data) an RSA private key?
  • isEXPORTER … has this token been defined as an EXPORTER key?

I use these to check that the keys passed in are valid for the ICSF operations being performed.

The GitHub code is work in progress. As I find problems I’m fixing them, but overall it should show you what you can do with it.

How do I put today’s date in JCL?

I have a backup job which I run to take a current copy of a file and save it with today’s date. For months I’d been editing it to change the date by hand. Ten minutes of browsing the internet showed me how easy it was to automate!

For example

//MYLIBS1 JCLLIB ORDER=USER.Z24C.PROCLIB
// SET TODAY=D&YYMMDD
//S1 EXEC PROC=BACKUP,P=USER.Z24C.PARMLIB,DD=&TODAY.

The procedure has

//BACKUP PROC P='USER.Z24C.PROCLIB',DD='UNKNOWN'
//S1 EXEC PGM=IKJEFT01,REGION=0M,
// PARM='XMIT A.A DSN(''&P'') OUTDSN(''BACKUP.&DD..&P'')'
//SYSPRINT DD SYSOUT=*
//SYSTSPRT DD SYSOUT=*
//SYSTSIN DD *
// PEND

which gave me

IEFC653I SUBSTITUTION JCL - PGM=IKJEFT01,REGION=0M,PARM='XMIT A.A DSN(''USER.Z24C.PARMLIB'') OUTDSN(''BACKUP.D210906.USER.Z24C.PARMLIB'')'

I could have used (see here for a complete list)

  • &LYYMMDD for local date
  • &HHMMSS for time
  • &LHHMMSS for local time

Easy – I should have done this years ago!

ICSF: why do I need transport keys as well as data encryption keys?

As part of my scenario of encrypting a file and sending the encrypted file to another z/OS system, I struggled to understand why the documentation referred to transport keys, key encrypting keys (KEKs), import keys and export keys.

I found the subject very unclear. As I currently see it (and I’ve changed my view several times), you need the matching key on each system. If it is a symmetric key, it is the same key at each end. If you are using PKI, the keys are asymmetric.

How do you get symmetric keys onto both systems? I see there are two ways:

  1. Generate the same key on both systems. This can be done using private and public keys, and a technique called Diffie-Hellman.
  2. Generate a key on one system, and send it securely to the other system. For this you need to securely package the symmetric key while it is in transit.

I was able to perform the setup and transfer a file securely to another system without the need for these additional keys. What was I missing?

The discussion about transport keys is for the second case, where keys are sent over the network. You can use a CIPHER key to encrypt the key. It comes down to: can I do it? Yes. Should I do it? No (well, maybe not; it depends on the scale of the risk).

Within an IT environment, userid administration should be done by a different team from the systems programmers. This prevents conflicts of interest, fraud, and errors: the system programmers cannot give themselves access to sensitive data. In my small company (with just me in it) I have to do both sysprog and userid administration.

IBM has similar guidelines for implementing cryptography. For example

  • Separation of the roles and responsibilities. The people who create keys are different from the people who give access to the keys, and from the people who use the keys.
  • Separation of encryption keys based on what they are used for. A key for encrypting datasets should not be used for encrypting a key to send to a remote system. If a data set encryption key is made public, the key-encryption-key should still be secure.

I could provide isolation of keys by having two keys: one authorised only for data set encryption and the other authorised only for key encryption. But this separation may not be enough.

Creating exporter/importer using the API

I spent a couple of days trying to create an importer/exporter pair. I found one way of doing it; there may be other (more obscure) ways. It uses Diffie-Hellman to create the same key on two sites without transferring sensitive material. I describe it here. It requires each side to have its own private key, and the public key of the other side.

There are three parts

  • Generate a skeleton
  • Use the skeleton, private key, and public key to generate the Diffie-Hellman key
  • Store it in the key store

Exporter:Generate a skeleton

I used CSNBKTB2 with rules 'AES '||'INTERNAL'||'EXPORTER'.

Exporter:Generate the DH Key

I have a "helper" Rexx function whose parameters are the private key name, the public key name, and the completed skeleton.

It uses CSNDEDH with

  • rule_array = 'DERIV01 '||'KEY-AES '
  • party_identifier (a string both sides agree on) = 'COLINS-id'
  • KEK_key_identifier_length = 0. This is used when the private key is not stored in the PKDS, but passed in encrypted. I think of this as acting as a proxy: "here is the private key to use, but it has been encrypted with the KEK which is in your local key store". Setting the length to zero says the private key *is* in the local key store. Definitely an advanced topic!
  • Name of side A's private key in the PKDS
  • Name of side B's public key (from the other side) in the PKDS
  • key_bit_length = 256.

It returns a blob encrypted with the local master key.

Exporter:addckds

This is another Rexx helper. It takes the name of the key to generate, the encrypted blob, and "replace=Y|N".

This uses

  • CSNBKRC2 to add it to the CKDS
  • if it gets "record found" and needs to replace the record,
    • it invokes delckds, which uses CSNBKRD to delete it
    • it tries the add again

Importer (on the remote system)

The steps are the same, except

  • I used CSNBKTB2 with rules 'AES '||'INTERNAL'||'IMPORTER'.
  • When generating the DH key, use the other keys: side B's private and side A's public.

To export a key using exporter/importer

If you are using an AES exporter key to encrypt the key being sent, you need to use CSNDSYX with

  • The name of the key you want to export
  • The label of the AES exporter key
  • rule_array = 'AES '||'AESKW '

It returns a blob which you can write to a data set.

To import the key using exporter/importer

Read the data into a buffer.

Use CSNDSYI2 with

  • rule_array = 'AES '||'AESKW '
  • the name of the importer key

It returns a blob of data.

Use the helper addckds passing the new label name, the blob of data, and replace=yes|no.

  • This uses CSNBKRC2 to add the record, with rule_array = ''
  • If the record exists and replace=yes then
    • use delckds with CSNBKRD and rule_array = 'LABEL-DL'
    • re-add it

To export a key using PKI public/private keys

If you are using a PKI cipher key to encrypt the key being sent, you need to use CSNDSYX with

  • The name of the key you want to export
  • The label of the PKI public key
  • rule_array = 'AES '||'PKOAEP2 '

To import the key using a PKI private key

Read the data into a buffer.

Use CSNDSYI2 with

  • rule_array = 'AES '||'PKOAEP2 ', matching the exporter
  • the name of the private key

It returns a blob of data.

Use the helper addckds passing the new label name, the blob of data, and replace=yes|no.

  • This uses CSNBKRC2 to add the record, with rule_array = ''
  • If the record exists and replace=yes then
    • use delckds with CSNBKRD and rule_array = 'LABEL-DL'
    • re-add it

ICSF: exploiting Rexx

ICSF provides APIs and commands to manage cryptographic keys. For example, to encrypt a data set you need to define the key that will be used.

You can use Rexx to drive the APIs and make your own commands.

There are some Rexx samples provided with ICSF, and there are others on the internet if you search for the API function name and Rexx. These tend to be large Rexx execs, each written to do one function.

You can use the power of Rexx to allow significant reuse of these execs, by having one Rexx exec to generate a key, another to add it to the keystore, another to export it, and another to import it.

Background

Rexx Address linkpgm facility

With TSO Rexx there is the "address linkpgm" command environment. This allows you to call z/OS programs with Rexx variables as parameters.

For example

rc = 0
y = "Mystring"
z = 16
address linkpgm "ZOSPROG myrc Y Z"

generates the standard low level request

call ZOSPROG(addr(myrc),addr(y),addr(z));

It returns a variable 'RC', for example -3 if the program is not found, or the return code from the program.

Be careful not to specify 'RC' as a parameter, as it may get overwritten.

It does what you tell it. If you are expecting a string to be returned, then the variable you give it must be big enough to hold the data; it cannot allocate a bigger string.

If you want to create a variable of a fixed size you can use

token = copies('00'x,3500)

If you are passing a number or hex string, you have to convert it to the internal value.

For example on input

myInt = '00000000'x
mylen = d2c(length("ABCDEFG"),4) /* the 4 says make the field 4 bytes (an int) wide */

on output, convert the hex return code to a readable hex code

myrc = c2x(myrc)

To create an internal format length you can use either of

lToken = '00001964'x /* 6500 */
lToken = d2c(6500,4) /* of size 4 */

Passing parameters to external Rexx programs

You can call external Rexx programs and get data returned. For example,

with the program mycode

parse arg a,b
return 0 "COLINS"||a b||"xxx"

and call it using

zz = mycode("AA","B")
say zz
parse var zz rc x y
say rc
say x
say y

gives

0 COLINSAA Bxxx
0
COLINSAA
Bxxx

Using this you can have an external function which generates an AES key, which returns the return code, reason code and the data.

Using hex strings

Many of the ICSF functions return a hex structure. You can convert this from internal format using the Rexx function c2x. This takes a string and creates the hex representation of it. When you want to use it in another ICSF function, you convert it back again using x2c().

x = 'ABC'
y = c2x(x)
say 'y:'y /* gives y:C1C2C3 */

When an ICSF function returns data, you can convert it to the hex string, and return it to the caller.

Using lengths

If a hex length has been returned, you can convert it to a Rexx number using C2D

x = '0000000C'x
say 'x:'c2d(x) /* prints x:12 */

Converting from Rexx to internal format

x = 14
y = d2c(x,4) /* a 4 byte field */
say 'x:'c2x(y) /* display in hex gives x:0000000E */

Using ICSF from Rexx

Using the program

/*********************************************/ 
/* Generate a 256-bit AES DATA key to export */ 
/*********************************************/ 
rc = genAES() /* this returns several bits of data*/
say "CPBKGN " rc 

parse var rc myrc myrs key 
if myrc <> 0 then return rc 
                                                                         
/********************************************/ 
/* Store the AES DATA key in the CKDS       */ 
/********************************************/ 
/* just return code */ 
rc= addCKDS("REXXLABEL",key) 
say "CPBkrc2" rc 
return 0 

And GENAES

say "In GenAES" 
parse arg a  /* no parameters passed in */ 
/********************************************/ 
/* Generate a 256-bit AES DATA key to export*/ 
/********************************************/ 
key_form               = 'OP  ' 
key_length             = 'KEYLN32 ' 
key_type_1             = 'AESDATA ' 
key_type_2             = '' 
kek_id_1               = COPIES('00'x,64) 
kek_id_2               = '' 
generated_key_id_1 = COPIES('00'x,64) 
generated_key_id_2 = '' 
                                                                   
myrc             = 'FFFFFFFF'x 
myrs              = 'FFFFFFFF'x 
exit_data_length = d2c(0,4)
exit_data       = '' 
ADDRESS linkpgm "CSNBKGN", 
   'myrc'               'myrs'          , 
   'exit_data_length'   'exit_data'     , 
   'key_form'           'key_length'    , 
   'key_type_1'         'key_type_2'    , 
   'kek_id_1'           'kek_id_2'      , 
   'generated_key_id_1' 'generated_key_id_2' 
 
myrc = c2d(myrc)
myrs = c2d(myrs)                                                                 
IF (myrc <> 0 ) THEN 
  DO 
    SAY 'KGN Failed   (rc='myrc' rs='myrs')' 
    Return  myrc myrs 
  END 
                                                                  
Return  myrc myrs c2x(generated_key_id_1)
                                                                         

ADDCKDS

/* -------------------------------------------*/ 
/*  Add CKDS : label and data                 */ 
/* CSNBKRC2 - Key Record Create2              */ 
/* -------------------------------------------*/ 
parse arg label, token 
say "CPBKRC2 " label token 
myrc = 'FFFFFFFF'x 
myrs = 'FFFFFFFF'x 
exit_length =d2c(0,4)
exit_data = '' 
rule_count = d2c(0,4)
rule_array = '' 
token = x2c(token) 
token_length = d2c(length(token),4)
label = left(label,64) /* Make sure label length = 64 */ 
ADDRESS LINKPGM "CSNBKRC2", 
   'myrc'          'myrs'            , 
   'exit_length'   'exit_data'       , 
   'rule_count'    'rule_array'      , 
   'label'         'token_length'   , 
   'token'                                                              
myrc = c2d(myrc)
myrs = c2d(myrs)                                                                 
IF (myrc <> 0 ) THEN 
  DO 
    /* print the return code and description text */
    SAY 'KRC2 Failed   (rc='myrc' rs='myrs')',
             cprs(myrc,myrs)
    RETURN  myrc myrs 
  END 
                                                                 
RETURN   myrc myrs 

and the printable reason code

/* exec to give back reason code string from passed value */ 
parse arg rc,rs 

v.= "Not listed" rs 
v.762="The key values structure for CSNDPKB has a field in error. "||, 
            "A length or format is not correct" 
v.2012="The rule_array_count parameter contains a" ||, 
           " number that is not valid." 
v.2016="Rule Array wrong" 
v.2040="Wrong key type.   For example PKI when Importer was expected" 
v.2054="RSA:OAEP optional encoding parameters failed validation" 
v.2089="The algorithm does not match the algorithm"||, 
           " of the key identifier" 
v.10012="Key not found" 
....
return v.rs 

Notes:

I converted from a string to a hex representation of the string when passing data around, because the hex data could have blanks in it. The Rexx statement parse var x a b c parses on blank-delimited words, and embedded blanks could cause a mis-parse.
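
The hazard can be illustrated outside Rexx (a Python sketch of the same blank-delimited parse problem; the variable names are invented):

```python
# A binary key token can contain a blank (0x20), which would break a
# blank-delimited parse such as Rexx "parse var reply rc token rest".
token = b"\x01 \x02"                      # token with an embedded blank

reply = "0 " + token.decode("latin-1") + " MORE"
rc, parsed, rest = reply.split(" ", 2)
assert parsed != token.decode("latin-1")  # mis-parsed: the token was split

# Hex-encoding makes the token a single blank-free word (like Rexx c2x)
reply = "0 " + token.hex().upper() + " MORE"
rc, parsed, rest = reply.split(" ", 2)
assert bytes.fromhex(parsed) == token     # round-trips correctly (like x2c)
```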