The first command takes my existing (expired) certificate belonging to userid START1 and creates a certificate request in the data set. The request looks like
-----BEGIN NEW CERTIFICATE REQUEST-----
MIIBgjCCAQcCAQAwNzEUMBIGA1UEChMLTkVXVEVDQ1RFU1QxDDAKBgNVBAsTA1NT
...
qZgQtwIwbYYgRWDQcPOZ92sVszf5Bv+mslcDjNAuM5Sj4Z9uadnKsaTmiy6h16tr
TpPAW84d
-----END NEW CERTIFICATE REQUEST-----
The Gencert command renews it with the specified date. If you omit the date it defaults to a year from the start date.
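The two steps can be sketched as follows. The data set name, certificate labels, and date below are placeholders rather than the ones I used, so adjust them for your system:

```
RACDCERT ID(START1) GENREQ(LABEL('MYCERT')) DSN('COLIN.CERT.REQ')
RACDCERT ID(START1) GENCERT('COLIN.CERT.REQ') SIGNWITH(CERTAUTH LABEL('MYCA')) +
         NOTAFTER(DATE(2027-05-01))
```

The first command creates the request from the existing certificate; the second signs it with the CA and sets the new expiry date.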
With most of my gencert requests, I have specified information like
If you get the message “has an incorrect date range”, it means the date range of the certificate being added is not within the date range established by the CA (certificate authority) certificate.
This is a hint that I need to renew my CA certificate as it will expire in the next two years.
After the gencert command was successful, the list command gave
IRRSEQ00, also known as R_ADMIN, can be used by an application to issue RACF commands or extract information from RACF. It is used by pysear, the Python interface to RACF.
Using this was not difficult – but it has its challenges (including a designed-in storage leak!).
The documentation explains how to search through the profiles.
The notes say
When using extract-next to return all general resource profiles for a given class, all the discrete profiles are returned, followed by the generic profiles. An output flag indicates if the returned profile is generic. A flag can be specified in the parameter list to request only the generic profiles in a given class. If only the discrete profiles are desired, check the output flag indicating whether the returned profile is generic. If it is, ignore the entry and terminate your extract-next processing.
To search for all of the profiles, specify a single blank as the name, and use the extract_next value.
There are discrete and generic profiles. If you specify flag bit 0x20000000 on an extract-next request, RACF returns the next alphabetic generic profile and does not retrieve discrete profiles. If you do not specify this bit, you get all profiles.
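The filtering advice from the notes can be sketched in Python. This is a toy model of the logic, not real IRRSEQ00 calls; I assume each extract-next returns a profile name plus its output flags, with 0x10 meaning "this profile is generic":

```python
FLAG_GENERIC = 0x10  # output flag: profile returned by RACF is generic

def discrete_profiles(results):
    """Keep only discrete profiles. RACF returns all discrete profiles
    before any generic ones, so stop at the first generic profile."""
    kept = []
    for name, flags in results:
        if flags & FLAG_GENERIC:
            break  # first generic profile - terminate extract-next processing
        kept.append(name)
    return kept

# Simulated extract-next output: discrete profiles first, then generic.
results = [("ACCT1", 0), ("ACCT2", 0), ("TESTGEN*", FLAG_GENERIC)]
print(discrete_profiles(results))  # ['ACCT1', 'ACCT2']
```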
This is where it gets hard.
The output is returned in a buffer allocated by IRRSEQ00, in the same format as the control block used to specify the parameters. After a successful request it contains the profile, and may contain all of the segments (such as a userid’s TSO segment), depending on the options specified.
Extract the information you are interested in.
Use this data as input to the next IRRSEQ00 call. I set pParms = buffer;
After the next IRRSEQ00 request, FREE THE STORAGE pointed to by pParms.
The output storage is obtained in the subpool specified by the caller in the Out_message_subpool parameter. It is the responsibility of the caller to free this storage.
I do not know how to issue a FREEMAIN/STORAGE request from a C program! Because you cannot free the z/OS storage from C, you effectively get a storage leak!
I expect the developers did not think of this problem. Other RACF calls return their data in the control block you pass in, and you get a return code if the control block is too small.
#pragma linkage(IRRSEQ00,OS)
/* Include standard libraries */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <inttypes.h>
int main(int argc, char *argv??(??))
{
  // This structure, taken from pysear, is the parameter block.
  // Note: it is used for both input and output.
  typedef struct {
    char eyecatcher[4];            // 'PXTR'
    uint32_t result_buffer_length; // result buffer length
    uint8_t subpool;               // subpool of result buffer
    uint8_t version;               // parameter list version
    uint8_t reserved_1[2];         // reserved
    char class_name[8];            // class name - upper case, blank padded
    uint32_t profile_name_length;  // length of profile name
    char reserved_2[2];            // reserved
    char volume[6];                // volume (for data set extract)
    char reserved_3[4];            // reserved
    uint32_t flags;                // see flag constants below
    uint32_t segment_count;        // number of segments
    char reserved_4[16];           // reserved
    char data[1];                  // start of extracted data
  } generic_extract_parms_results_t;
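As a cross-check of the layout, the same fixed header can be described with Python's struct module (big-endian, as on z/OS); the fixed part before data should come to 60 bytes. This is my own sketch, not part of pysear:

```python
import struct

# Field-by-field big-endian layout of the fixed header above:
# 4s eyecatcher, I result_buffer_length, B subpool, B version,
# 2s reserved_1, 8s class_name, I profile_name_length, 2s reserved_2,
# 6s volume, 4s reserved_3, I flags, I segment_count, 16s reserved_4
HEADER_FMT = ">4sIBB2s8sI2s6s4sII16s"

header = struct.pack(
    HEADER_FMT,
    b"PXTR",        # eyecatcher
    4096,           # result buffer length
    1,              # subpool
    1,              # parameter list version
    b"\x00" * 2,    # reserved
    b"ACCTNUM ",    # class name, blank padded to 8
    1,              # profile name length (a single blank starts the search)
    b"\x00" * 2,    # reserved
    b" " * 6,       # volume
    b"\x00" * 4,    # reserved
    0x20000000,     # flag: extract-next, generic profiles
    0,              # segment count
    b"\x00" * 16,   # reserved
)
print(struct.calcsize(HEADER_FMT))  # 60 - size of the fixed header
```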
Set up the irrseq00 parameters
I want to find all profiles for class ACCTNUM. You specify a starting profile of one blank, and use the get next request.
char work_area[1024];
int rc;
long SAF_RC, RACF_RC, RACF_RS;
long ALET = 0;
char Function_code = 0x20; // Extract next general resource profile
// The RACF userid is ignored for problem state callers
char RACF_userid[9];
char * ACEE_ptr = 0;
RACF_userid[0] = 0;        // set length to 0
char Out_message_subpool = 1;
char * Out_message_string; // returned by program
For TESTGEN* the flag is 0x10, which means “On output: indicates that the profile returned by RACF is generic. When using extract-next to cycle through profiles, the caller should not alter this bit.” For the others this bit is off, meaning the profiles are discrete.
I was setting up a script to compile some C code in Unix Services, and it worked – when I expected the bind to fail because I had not specified where to find a stub file.
How to compile the source
I used a shell script to compile and bind the source. I was surprised to see that it worked, because it needed some Linkedit stubs from CSSLIB. I thought I needed
* FUNCTION: z/OS V2.4 XL C/C++ Compiler Configuration file
*
* Licensed Materials - Property of IBM
* 5650-ZOS Copyright IBM Corp. 2004, 2018.
* US Government Users Restricted Rights - Use, duplication or
* disclosure restricted by GSA ADP Schedule Contract with IBM Corp.
*
It is easy to find the answer when you know the solution.
Note:
Without the export _C89_CCMODE=1
I got
IEW2763S DE07 FILE ASSOCIATED WITH DDNAME /0000002 CANNOT BE OPENED BECAUSE
         THE FILE DOES NOT EXIST OR CANNOT BE CREATED.
IEW2302E 1031 THE DATA SET SPECIFIED BY DDNAME /0000002 COULD NOT BE FOUND,
         AND THUS HAS NOT BEEN INCLUDED.
FSUM3065 The LINKEDIT step ended with return code 8.
Compiling in 64 bit
It was simple to change the script to compile it in 64 bit mode, but overall this didn’t work.
When I compiled in 64 bit mode, and tried to bind in 31/32 bit mode (omitting the -q64 option) I got messages like
IEW2469E 9907 THE ATTRIBUTES OF A REFERENCE TO isprint FROM SECTION irrseq#C
         DO NOT MATCH THE ATTRIBUTES OF THE TARGET SYMBOL. REASON 2
...
IEW2469E 9907 THE ATTRIBUTES OF A REFERENCE TO IRRSEQ00 FROM SECTION irrseq#C
         DO NOT MATCH THE ATTRIBUTES OF THE TARGET SYMBOL. REASON 3
IEW2456E 9207 SYMBOL CELQSG03 UNRESOLVED. MEMBER COULD NOT BE INCLUDED FROM
         THE DESIGNATED CALL LIBRARY.
...
IEW2470E 9511 ORDERED SECTION CEESTART NOT FOUND IN MODULE.
IEW2648E 5111 ENTRY CEESTART IS NOT A CSECT OR AN EXTERNAL NAME IN THE MODULE.
IEW2469E THE ATTRIBUTES OF A REFERENCE TO … FROM SECTION … DO NOT MATCH THE ATTRIBUTES OF THE TARGET SYMBOL. REASON x
Reason 2: the XPLINK attributes of the reference and target do not match.
Reason 3: either the reference or the target is AMODE 64 and the AMODEs do not match. The IRRSEQ00 stub is only available in 31-bit mode; my program was AMODE 64.
IEW2456E SYMBOL CELQINPL UNRESOLVED. MEMBER COULD NOT BE INCLUDED FROM THE DESIGNATED CALL LIBRARY.
The compile in 64 bit mode generates an “include…” of the 64 bit stuff needed by C. Because the binder was in 31 bit, it used the 31 bit libraries – which did not have the specified include file. When you compile in 64 bit mode you need to bind with the 64 bit libraries. The compile command sorts all this out depending on the options.
The libraries used when binding in 64 bit mode are syslib_x = cee.sceebnd2:cbc.sccnobj:sys1.csslib. See the xlc config file above.
IEW2470E 9511 ORDERED SECTION CEESTART NOT FOUND IN MODULE.
IEW2648E 5111 ENTRY CEESTART IS NOT A CSECT OR AN EXTERNAL NAME IN THE MODULE.
Compiling in 64-bit mode generates an entry point of CELQSTRT instead of CEESTART, so binder instructions for 31-bit programs that specify an entry point of CEESTART will fail.
Overall
Because IRRSEQ00 only comes in a 31-bit flavour, and not a 64-bit flavour, I could not call it directly from a 64-bit program; I had to use a 31-bit compile and bind.
The Python package pysear to work with RACF is great. The source is on github, and the documentation starts here. It is well documented, and there are good examples.
I’ve managed to do a lot of processing with very little of my own code.
One project I’ve been meaning to do for some time is to extract the contents of a RACF database, compare it with a different database, and show the differences. IBM provides a batch program and a very large Rexx exec; this has some bugs and is not very nice to use. There is a Rexx interface, which worked, but I found I was writing a lot of code. Then I found pysear.
Background
The data returned for userids (and other types of data) has segments. You can display the base segment for a user:
tso lu colin
To display the TSO segment
tso lu colin tso
Field names returned by pysear have the segment name as a prefix, for example base:max_incorrect_password_attempts.
The trait “base:active_classes” is a list of classes, for example [“DATASET”, “USER”, …].
The trait is documented as:

Trait: base:active_classes
RACF Key: classact
Data Types: string
Operators Allowed: "add", "remove"
Supported Operations: "alter", "extract"
Because it is a list, you can add or remove an element; you do not use set or delete, which would replace the whole list.
Some traits, such as use counts, have Operators Allowed of N/A. You can only extract and display the information.
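The difference between add/remove and set can be shown with a toy model. This is not pysear's implementation, just the semantics as I understand them:

```python
def apply_operator(current, operator, value):
    """Apply a trait operator to a list-valued trait (toy model)."""
    if operator == "add":
        return current + [value] if value not in current else current
    if operator == "remove":
        return [v for v in current if v != value]
    if operator == "set":
        return [value]  # would replace the whole list - not what you want here
    raise ValueError(operator)

classes = ["DATASET", "USER"]
classes = apply_operator(classes, "add", "FACILITY")
print(classes)  # ['DATASET', 'USER', 'FACILITY']
```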
My second query
What are the userids in RACF?
The traits are listed here, and code examples are here.
I used
from sear import sear
import json

# get all userids beginning with ZWE
users = sear(
    {
        "operation": "search",
        "admin_type": "user",
        "userid_filter": "ZWE",
    },
)
profiles = users.result["profiles"]

# Now process each profile in turn.
# Because these are userid profiles we need admin_type=user and userid=...
for profile in profiles:
    user = sear(
        {
            "operation": "extract",
            "admin_type": "user",
            "userid": profile,
        },
    )
    segments = user.result["profile"]
    # print("segment", segments)
    for segment in segments:  # e.g. base or omvs
        for w1, v1 in segments[segment].items():
            # print(w1, v1)
            # for w2, v2 in v1.items():
            #     print(w1, w2, v2)
            json_data = json.dumps(v1, indent=2)
            print(w1, json_data)
{
"commands": [
{
"command": "PERMIT MVS.DISPLAY.* CLASS(OPERCMDS)ACCESS (CONTROL) ID(ADCDG)",
"messages": [
"ICH06011I RACLISTED PROFILES FOR OPERCMDS WILL NOT REFLECT THE UPDATE(S) UNTIL A SETROPTS REFRESH IS ISSUED"
]
}
],
"return_codes": {
"racf_reason_code": 0,
"racf_return_code": 0,
"saf_return_code": 0,
"sear_return_code": 0
}
}
Error handling
Return codes and error messages
There are two layers of error handling.
Invalid requests – problems detected by pysear.
A non-zero return code from the underlying RACF code.
If pysear detects a problem it returns it in
result.result.get("errors")
For example, you have specified an invalid parameter such as "userzzz": "MINE".
If you do not have this field, then the request was passed to the RACF service, which returns multiple values. See IRRSMO00 return and reason codes. There will be values for:
SAF return code
RACF return code
RACF reason code
sear return code.
If the RACF return code is zero then the request was successful.
To make error handling easier, and have one error handler for all requests, I used
try:
    result = try_sear(search)
except Exception as ex:
    print("Exception-Colin Line112:", ex)
    quit()
Where try_sear was
def try_sear(data):
    # execute the request
    result = sear(data)
    if result.result.get("errors") is not None:
        print("Request:", result.request)
        print("Error with request:", result.result["errors"])
        raise ValueError("errors")
    elif result.result["return_codes"]["racf_return_code"] != 0:
        rcs = result.result["return_codes"]
        print("SAF Return code", rcs["saf_return_code"],
              "RACF Return code", rcs["racf_return_code"],
              "RACF Reason code", rcs["racf_reason_code"],
              )
        raise ValueError("return codes")
    return result
Overall
This interface is very easy to use. I use it to extract definitions from one RACF database and save them as JSON files, repeat with a different (historical) RACF database, then compare the two JSON files to see the differences.
Note: The sear command only works with the active database, so I had to make the historical database active, run the commands, and switch back to the current database.
The command (CLASS(DATASET) is the default, so can be omitted)
SEARCH MASK(COLIN,44)
gave me the profiles starting with COLIN and containing 44:
COLIN.MQ944.** (G)
List a profile LISTDSD
tso listdsd dataset('COLIN.MQ944.**')
gave
INFORMATION FOR DATASET COLIN.MQ944.** (G)
LEVEL  OWNER     UNIVERSAL ACCESS   WARNING   ERASE
-----  --------  ----------------   -------   -----
 00    COLIN           NONE           NO       NO
...
YOUR ACCESS   CREATION GROUP   DATASET TYPE
-----------   --------------   ------------
   ALTER           SYS1          NON-VSAM
tso listdsd dataset('COLIN.MQ944.SOURCE')
gave
ICH35003I NO RACF DESCRIPTION FOUND FOR COLIN.MQ944.SOURCE
You need the generic option
tso listdsd dataset('COLIN.MQ944.SOURCE') generic
gave
INFORMATION FOR DATASET COLIN.MQ944.** (G)
LEVEL  OWNER     UNIVERSAL ACCESS   WARNING   ERASE
-----  --------  ----------------   -------   -----
 00    COLIN           NONE           NO       NO
...
YOUR ACCESS   CREATION GROUP   DATASET TYPE
-----------   --------------   ------------
   ALTER           SYS1          NON-VSAM
This says that if I were to use data set ‘COLIN.MQ944.SOURCE’, RACF would check profile COLIN.MQ944.**, and I would have ALTER access to it.
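The matching can be illustrated with a small Python sketch. This is a simplification of RACF's generic matching rules ('**' here matches anything, '*' a single qualifier), just enough to show why COLIN.MQ944.SOURCE is covered by COLIN.MQ944.**:

```python
import re

def generic_to_regex(profile):
    """Convert a RACF-style generic data set profile to a regex.
    Simplified: '**' matches any number of qualifiers, '*' one qualifier.
    Real RACF matching has more rules - this is just an illustration."""
    pattern = re.escape(profile)
    # handle '**' before '*' so the longer wildcard wins
    pattern = pattern.replace(r"\*\*", ".*").replace(r"\*", "[^.]*")
    return re.compile("^" + pattern + "$")

prof = generic_to_regex("COLIN.MQ944.**")
print(bool(prof.match("COLIN.MQ944.SOURCE")))  # True
print(bool(prof.match("COLIN.MQ945.SOURCE")))  # False
```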
With RACF you can define a profile and give userids access to it. You can also define a global profile for heavily used data sets, so the profile is cached and no I/O is needed to the RACF database.
For some resources used very frequently, you can cache definitions in memory. These are called GLOBAL definitions. When a check is made for a userid to access a resource, if the definition is a global definition, then there should be no RACF database I/O, and should be fast.
Define a global resource
You need to set up the global resource before you can use it. See the IBM documentation.
SETROPTS GLOBAL(DATASET)
RDEFINE GLOBAL DATASET
SETROPTS GLOBAL(DATASET) REFRESH
and
RALTER GLOBAL DATASET ADDMEM('SYS1.HELP'/READ)
ADDSD 'SYS1.HELP' UACC(READ)
SETROPTS GLOBAL(DATASET) REFRESH
to define a resource. It gives a default of read access to the data set SYS1.HELP.
You can display the contents of the global data set class
rlist global dataset
which gives
CLASS     NAME
-----     ----
GLOBAL    DATASET
...
RESOURCES IN GROUP
--------- -- -----
SYS1.HELP/READ
...
You can delete a global profile
RALTER GLOBAL DATASET DELMEM('SYS1.HELP'/READ)
SETROPTS GLOBAL(DATASET)
You can remove the global dataset class if there are no elements in the class:
RDELETE GLOBAL DATASET
SETROPTS NOGLOBAL(DATASET)
SETROPTS GLOBAL(DATASET) REFRESH
If you now list the global profile
rlist global dataset
gives
ICH13003I DATASET NOT FOUND
I’m guessing that if you want READ access to the SYS1.HELP data set, the entry in the GLOBAL DATASET class will be found. If you want UPDATE access to SYS1.HELP, then because there is no matching entry in the GLOBAL DATASET class, checking will fall through to the normal profiles defined with ADDSD.
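My guess at the checking order can be written as a small sketch. This is my understanding, not the actual RACF algorithm:

```python
ACCESS_ORDER = ["NONE", "READ", "UPDATE", "CONTROL", "ALTER"]

def check(global_table, dataset, requested):
    """Sketch of the fast path: if the in-memory GLOBAL entry covers the
    requested access, grant it with no RACF database I/O; otherwise fall
    through to normal profile checking (which does need the database)."""
    allowed = global_table.get(dataset)
    if allowed is not None and ACCESS_ORDER.index(requested) <= ACCESS_ORDER.index(allowed):
        return "granted from GLOBAL entry, no DB I/O"
    return "fall through to normal profile checking"

global_table = {"SYS1.HELP": "READ"}
print(check(global_table, "SYS1.HELP", "READ"))
print(check(global_table, "SYS1.HELP", "UPDATE"))
```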
You do not need to configure the GLOBAL DATASET class, but it can give performance benefits if you are on a heavily used system. It is not enabled on my one-person zD&T system.
Beware
The documentation also defines a “normal” profile, ADDSD ‘SYS1.HELP’ UACC(READ). I’m guessing that this is a fall-back in case someone deactivates the global dataset profiles.
So you should read the documentation and follow its instructions.
Signed certificates are very common, but I was asked how I connected my laptop to my server, in the scenario “one up” from a trivial example.
Basic concepts
A private/public key pair is generated on a machine. The private key stays on the machine (securely). The public key can be sent anywhere.
A certificate has (amongst other stuff)
Your name
Address
Public key
Validity dates
Getting a signed certificate
When you create a certificate, it does a checksum of the contents of the certificate, encrypts the checksum with your private key, and attaches this encrypted value to the certificate.
Conceptually, you go to your favourite Certificate Authority (UKCA) building and they sign it:
They check your passport and gas bill with the details of your certificate.
They attach the UKCA public key to your certificate.
They do a checksum of the combined documents.
They encrypt the checksum with the UKCA private key, and stick this on the combined document.
You now have a signed certificate, which you can send to anyone who cares.
Using it
When I receive it, and use it
my system compares my copy of the UKCA public certificate with the one in your certificate – it matches!
Using (either) UKCA public certificate – decrypt the encrypted checksum
Do the same checksum calculation – and the two values should match.
If they match I know I can trust the information in the certificate.
This means the checking of the certificate requires the CA certificate that signed it.
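The verification steps can be illustrated with a toy Python sketch. The asymmetric encrypt/decrypt is simulated with a trivial reversible transform (a stand-in for RSA/ECDSA), so only the checksum logic is real:

```python
import hashlib

def sign(document, private_encrypt):
    """Checksum the document and 'encrypt' the checksum with the
    (simulated) private key - this is the attached signature."""
    checksum = hashlib.sha256(document).hexdigest()
    return private_encrypt(checksum)

def verify(document, signature, public_decrypt):
    """'Decrypt' the signature with the (simulated) public key and
    compare it with a freshly computed checksum."""
    recovered = public_decrypt(signature)
    return recovered == hashlib.sha256(document).hexdigest()

# Stand-in for the asymmetric key pair: a reversible toy transform.
private_encrypt = lambda s: s[::-1]
public_decrypt = lambda s: s[::-1]

doc = b"name, address, public key, validity dates"
sig = sign(doc, private_encrypt)
print(verify(doc, sig, public_decrypt))                # True
print(verify(doc + b" tampered", sig, public_decrypt)) # False
```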
To use a (Linux) certificate on z/OS you either need to
issue the RACF GENCERT command on the Linux .csr file, export it, then download it to Linux. The certificate will contain the z/OS CA’s certificate.
import the Linux CA certificate into RACF (This is the easy, do once solution.)
then
connect the CA certificate to your keyring, and usually restart your server.
Setting up my system
If the CA certificate is not on your system, you need to import it from a dataset.
You can use FTP, or use cut and paste to the dataset.
Once you have the CA certificate in your RACF database you can connect it to your keyring.
Certificate:
    Data:
        ...
        Issuer: C = GB, O = DOC, OU = CA, CN = LINUXDOCCA256
        ...
        Subject: C = GB, O = DOC, OU = CA, CN = LINUXDOCCA256
        ...
        X509v3 extensions:
            ...
            X509v3 Basic Constraints: critical
                CA:TRUE, pathlen:0
            X509v3 Key Usage:
                Digital Signature, Certificate Sign, CRL Sign
            ...
It has CA:TRUE and X509v3 Key Usage: Certificate Sign, which allows this certificate to be used to sign certificates.
Installing the CA certificate on z/OS
You need to copy the docca256.pem file from Linux to a z/OS data set (fixed block, LRECL 80, BLKSIZE 80); you can use FTP or cut and paste. I used data set COLIN.DOCCA256.PEM.
Import it into z/OS, and connect it to the START1.MYRING keyring as a CERTAUTH.
The title When tracing a job it helps to trace the correct address space is a clue – it looks obvious, but the problem was actually subtle.
The scenario
I was testing the new version of Zowe, and one of the components failed to start because it could not find a keyring. Other components could find it ok. I did a RACF trace and there were no records. The question is why were there no records?
The execution environment.
I start Zowe with S ZOWE33. This spawns some processes such as ZOWE335. This runs a Bash script which starts a Java program.
I start a GTF trace with
s gtf.gtf,m=gtfracf
#set trace(callable(type(41)),jobname(Zowe*))
Where callable type 41 is for r_datalib services to access a keyring.
No records were produced
What is the problem? Take a few minutes to think about it.
Solution
After 3 days I stumbled on the solution – having noticed, but ignored, the evidence. I wondered if the Java code to process keyrings did not use the R_datalib API; I wondered if Java 21 uses a different jar file for processing keyrings – yes it does – but this did not solve the problem.
The solution was I should have been tracing job ZWE33CS! Whoa – where did that come from?
When a new z/OS® UNIX process is started, it runs in a z/OS UNIX initiator (a BPXAS address space). By default, this address space has an assigned job name of userIDx, where userID is the user ID that started the process, and x is a decimal number. You can use the _BPX_JOBNAME environment variable to set the job name of the new process. Assigning a unique job name to each … process helps to identify the purpose of the process and makes it easier to group processes into a WLM service class.
If I use the command D A,L it lists all of the address spaces running on the system. I had seen the ZOWE33* ones, and also the ZWE* ones – but ignored the ZWE* ones. Once I knew the solution it was so obvious.
USER ID   FROMJOB   RESULT    TIME      APPL      FORUSER    USER NAME
--------  --------  --------  --------  --------  ---------  ------------
ZWESVUSR  ZWE1AZ    SUCCESS   15:00:55  MQWEB     COLIN      ZOWE SERVER
This shows that job ZWE1AZ, running with userid ZWESVUSR, successfully created a pass ticket for userid COLIN with application MQWEB.
Show where the pass ticket is used
Once the pass ticket had been used, I used the following JCL to display the JOBINIT audit record.
USER ID   RESULT    TIME      JOBNAME   APPL      SESSTYPE  PTOEVAL   PSUCC
--------  --------  --------  --------  --------  --------  --------  --------
COLIN     SUCCESSP  15:01:02  CSQ9WEB   MQWEB     OMVSSRV   YES       YES
COLIN     RACINITD  15:01:02  CSQ9WEB   MQWEB     OMVSSRV   NO        NO
The first record shows,
in job CSQ9WEB,
running with APPLication id of MQWEB.
Sesstype OMVSSRV is a z/OS UNIX server application. See RACROUTE TYPE=VERIFY under SESSION=type.
userid COLIN SUCCESSfully logged on with a PassTicket (SUCCESSP),
PTOEVAL – YES the supplied password was evaluated as a PassTicket,
PSUCC – YES the supplied password was evaluated successfully as a PassTicket.
The second record shows RACINITD (Successful RACINIT deletion) for the userid COLIN in the job CSQ9WEB, and the password was not used.
This blog post was written as background to some blog posts on Zowe API-ML. It provides background knowledge for HTTPS servers running on z/OS, and I think it is useful on its own. I’ve written about an MQWEB server, because I have configured this on my system.
I want to manage my z/OS queue manager from my Linux machine. I have several ways of doing it.
Which architecture?
Use an MQ client. Establish a client connection to the CHINIT, and use MQPUT and MQGET of administration messages to the queue manager.
You can issue a command string, and get back a response string which you then have to parse
You can issue an MQINQ API request to programmatically query attributes, and get the values back in fields. No parsing, but you have to write a program to do the work.
Use the REST API. This is an HTTP request in a standard format into the MQWEB server.
You can issue a command string, and get back a response string which you then have to parse to extract the values.
You can issue a request where the query is encoded in the URL, and get the response back in JSON format. It is trivial to extract individual fields from the returned data.
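As an illustration of the second style, the request can be built with everything encoded in the URL. The host, port, queue manager, queue, and attribute name here are hypothetical; check the path against your MQWEB server's REST API documentation:

```python
from urllib.parse import quote, urlencode

host = "https://my-zos-host:9443"  # hypothetical MQWEB host:port
qmgr = "CSQ9"                      # hypothetical queue manager
queue = "CP0000"                   # hypothetical queue

# Query a queue's attributes; the response comes back as JSON, so
# individual fields can be picked out without parsing message text.
url = (f"{host}/ibmmq/rest/v2/admin/qmgr/{quote(qmgr)}/queue/{quote(queue)}"
       "?" + urlencode({"attributes": "general.description"}))
print(url)
```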
Connecting to the MQWEB server
If you use REST (over HTTPS) there are several ways of doing this
You can connect using userid and password. It may be OK to enter your password when you are at the keyboard, but not if you are using scripts, when you may be away from your keyboard. If hackers get hold of the password, they have weeks to use it before the password expires. You want to give your password once per session, not with every request.
You can connect using certificates, without specifying userid and password.
It needs a bit of set up at the server to map your certificate to a userid.
It takes some work to set up how to revoke your access, if you leave the company, or the certificate is compromised.
Your private key could be copied and used by hackers. There is discussion about reducing the validity period from over a year to 47 days; for some people this is still too long! You can keep your private key on a dongle which you have to present when connecting to a back end. This reduces the risk of hackers using your private key.
You can connect with both a certificate and a userid and password. The certificate is used to establish the TLS session, and the userid and password are used to logon to the application.
You can use a pass ticket. You invoke a z/OS service which, if you are authorised, generates a one-time password valid for 10 minutes or less. If hackers get hold of the pass ticket, they do not have long to exploit it. The application generating the pass ticket does not need the password of the userid, because the application has been set up as trusted.
You can use a JSON Web Token (JWT). This has some similarities with certificates. In the payload is a userid value and issuer value . I think of issuer as the domain the JWT has come from – it could be TEST or a company name. From the issuer value, and IP address range, you configure the server to specify a realm value. From the userid and realm you can map this to a userid on the server. This JWT can be valid from minutes to many hours (but under a day). The userid and realm mapping to a userid is different to certificate mapping to a userid.
Setting up a pass ticket
The pass ticket is used within the sysplex; it cannot be used outside of a sysplex. The pass ticket is a password, so it needs to be validated against the RACF database.
The application that generates the pass ticket must be authorised to a profile for the target application. For example, for the application TSO on system S0W1, the profile is TSOS0W1:
RDEFINE PTKTDATA TSOS0W1
and a profile to allow a userid to create a pass ticket for the application
Userids COLIN and IBMUSER can issue the callable service IRRSPK00 to generate a pass ticket for a user for the application TSOS0W1.
The output is a one-use password which has a validity of up to 10 minutes.
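The rules as described (one use, up to 10 minutes) can be modelled with a few lines of Python. This is not the real pass ticket evaluation algorithm, just the constraints:

```python
import time

def passticket_valid(issued_at, used, now=None, window=600):
    """Sketch of the constraints above: a pass ticket is a one-use
    password valid for up to 10 minutes (600 seconds)."""
    now = time.time() if now is None else now
    if used:
        return False  # one use only
    return (now - issued_at) <= window

t0 = 1_000_000.0
print(passticket_valid(t0, used=False, now=t0 + 300))  # True - within 10 minutes
print(passticket_valid(t0, used=False, now=t0 + 700))  # False - expired
print(passticket_valid(t0, used=True,  now=t0 + 10))   # False - already used
```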
As an example, you could configure your MQWEB server to use profile name MQWEB, or CSQ9WEB.
How is it used
A typical scenario is for an application running on a work station to issue a request to an “application” on z/OS, like z/OSMF, to generate a pass ticket for a userid and application name.
The client on the work station then issues a request to the back-end server, with the userid and pass ticket. If the back-end server's application name matches the one the pass ticket was generated for, the pass ticket will be accepted as a password. The logon will fail if a different application is used, so a pass ticket for TSO cannot be used for MQWEB. This is more secure than sending a userid and password with every back-end request, but there is additional work in creating the pass ticket, and two network flows.
This solution scales because very little work needs to be done on the work station, and there is some one-off work for the setup to generate the pass tickets.
The JWT sent from the client has an expiry time. This can be from seconds to hours. I think it should be less than a day – perhaps a couple of hours at most. If a hacker has a copy of the JWT, they can use it until it expires.
The back-end server needs to authenticate the token. It could do this by having a copy of the signer's public certificate in the server's keyring, or by sending a request to the originator to validate it.
If validation is being done with public certificates, because the client’s private key is used to generate the JWT, the server needs a copy of the public certificate in the server’s keyring. This can make it hard to manage if there are many clients.
For an <openidConnectClient…/> entry to be used, various parameters need to match:
The issuerIdentifier. This string identifies the client. It could be MEGABANK, TEST, or another string of your choice. It has to match what is in the JWT.
signatureAlgorithm. This matches the incoming JWT.
trustAliasName and trustStoreRef. These identify the certificate used to validate the JWT's signature.
remoteAddress. This is the address, or address range of the client’s IP addresses.
If you have 1000 client machines, you may need 1000 <openidConnectClient…/> definitions, because of the different certificate and IP addresses.
You may need 1000 entries in the RACMAP mapping of userid + realm to userid to be used on the server.
How is it used
You generate the JWT. There are different ways of doing this.
Use a service like z/OSMF
Use a service on your work station. I have used Python to do this. The program is 30 lines long and uses the Python jwt package
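A stdlib-only sketch of what such a program does is below. It uses HS256 (a shared secret) for simplicity, where a real deployment might sign with a private key (RS256); the claim names sub, iss, iat, and exp are the standard JWT registered claims:

```python
import base64
import hashlib
import hmac
import json
import time

def b64url(data: bytes) -> str:
    """Base64url without padding, as used in JWTs."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def make_jwt(userid, issuer, secret, lifetime=7200):
    """Build a signed JWT: header.payload.signature, HS256 for simplicity."""
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    now = int(time.time())
    payload = b64url(json.dumps(
        {"sub": userid, "iss": issuer, "iat": now, "exp": now + lifetime}).encode())
    signing_input = f"{header}.{payload}".encode()
    sig = b64url(hmac.new(secret, signing_input, hashlib.sha256).digest())
    return f"{header}.{payload}.{sig}"

token = make_jwt("COLIN", "TEST", b"shared-secret")
print(token.count("."))  # 2 - three dot-separated parts
```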
You get back a long string. You can see what is in the string by pasting the JWT into jwt.io. You pass this to the backend as a cookie or HTTP header; the name depends on what the server is expecting. For example
'Authorization': "Bearer " + token
The JWT has limited access
For the server to use the JWT, it needs definitions to recognise it. If you have two back end servers
Both servers could be configured to accept the JWT
If the servers specify different REALM values, then the mapped userid from the JWT could be different for each server, because the userid/realm-to-userid mapping can be different.
One server is configured to accept the JWT
If only one server has the definitions for the JWT, then trying to use the JWT to logon to another server will fail.