Understanding LTPA tokens for accessing a web site.

What is an LTPA token? – the short answer

When a client connects to a web server and logs on (for example with a userid and password), the server can send back a cookie containing encrypted logon information.

When the client sends this cookie with the next request, the server decrypts it, which shortens the logon process. Eventually the token expires and the client needs to log on again.

What is an LTPA token? – a longer answer.

There is a long (perhaps too long) description of LTPA here.

The Lightweight Third-Party Authentication description says that when accessing web servers that use the LTPA technology, it is possible for a web user to re-use their login across physical servers.

The server has an encryption key which it uses to encrypt the “reuse” content. This key could change every time the server starts; in fact it is good practice to change the key “often”.
For Liberty the information is configured in the <ltpa…/> tag. See Configuring LTPA in Liberty.

If a client has an LTPA2 token and the server restarts, the LTPA2 token will be accepted if the encryption keys are the same as before (providing it hasn’t expired). If the encryption key has changed, the client will need to log on to get a new LTPA token.

How to configure LTPA?

The ltpa tag in server.xml looks like

<ltpa keysFileName="yourLTPAKeysFileName.keys" 
      keysPassword="keysPassword" 
      expiration="120" />

Where

  • keysFileName defaults to ${server.output.dir}/resources/security/ltpa.keys. Note: this is a file, not a keyring.
  • expiration defaults to 120 minutes.

Often password or key data is just base64 encoded, so it is trivial to decode. You can encrypt these values using the Liberty securityUtility command. The createLTPAKeys option creates a set of LTPA keys for use by the server, or that can be shared with multiple servers. If no server or file is specified, an ltpa.keys file is created in the current working directory.
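
As a toy illustration of why base64 on its own is no protection (this is plain base64, not the securityUtility output format):

import base64

# base64 is an encoding, not encryption: anyone can reverse it
encoded = base64.b64encode(b"secretPassword").decode()
print(encoded)                              # c2VjcmV0UGFzc3dvcmQ=
print(base64.b64decode(encoded).decode())   # secretPassword - back in the clear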

Single server mode

If the web server is isolated and has its own encryption key, then the LTPA token can be passed to that server. It can use its encryption key to decrypt the userid information and use it.

If you try to use the LTPA on another web server, which has a different encryption key, the decryption will fail, and the LTPA cannot be used.

Sharing an encryption key

If multiple web servers share the same key, then the LTPA can be used on those servers. For example, you have multiple back-end servers for availability, and a work request can be routed to any server. Once the client has logged on and got the LTPA, future requests can be routed to any of the servers, without the need to logon. The LTPA can be decrypted because of the shared key.

Does this give a different access?

If MQWEB and z/OSMF share the same encryption key, a client can log on to MQWEB and get an LTPA token. The client can then use the LTPA token to log on to z/OSMF. All this does is replace the userid and password. MQWEB and z/OSMF still have to determine what permissions the userid has. LTPA does not affect the access.

What happens when the LTPA expires?

Using curl to send a request to MQWEB with an expired LTPA token, I got

 
< HTTP/2 401
...
* Replaced cookie LtpaToken2_8443="""" for domain 10.1.1.2, path /, expire 786297600
< set-cookie: LtpaToken2_8443=""; Expires=Thu, 01 Dec 1994 16:00:00 GMT; Path=/; Secure; HttpOnly
* Added cookie LtpaToken2_8443="""" for domain 10.1.1.2, path /, expire 786297600
< set-cookie: LtpaToken2_8443=""; Expires=Thu, 01 Dec 1994 16:00:00 GMT; Path=/; Secure; HttpOnly
* Added cookie LtpaToken2_8443="""" for domain 10.1.1.2, path /, expire 786297600
< set-cookie: LtpaToken2_8443=""; Expires=Thu, 01 Dec 1994 16:00:00 GMT; Path=/; Secure; HttpOnly
...
{"error": [{
"msgId": "MQWB0112E",
"action": "Login to the REST API to obtain a valid authentication cookie.",
"completionCode": 0,
"reasonCode": 0,
"type": "rest",
"message": "MQWB0112E: The 'LtpaToken2_8443' authentication token cookie failed verification.",
"explanation": "The REST API request cannot be completed because the authentication token failed verification."

Where MQ uses the token name LtpaToken2_${httpsPort}, so the cookie name includes the https port.

Because the LTPA token had expired, a null token was stored in the cookie.

MQ returned a message giving an explanation. Each product will be different.

Tracing LTPA2

Using a LTPA2 token.

I had problems with LTPA2 – I thought an LTPA2 token would expire – but it continued to work.
A Liberty trace with

com.ibm.ws.security.token.ltpa.internal.LTPAToken2=finest

gave me a lot more data which was not useful – but it did give me

LTPAToken2 3 Current time = Wed Aug 27 17:21:31 GMT 2025, expiration time = Wed Aug 27 19:16:23 GMT 2025

This shows the current time was 17:21 and the token expires at 19:16, so it had about 115 minutes remaining.
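
As a quick sanity check of that arithmetic (a throw-away Python sketch, nothing to do with Liberty itself):

from datetime import datetime

fmt = "%a %b %d %H:%M:%S GMT %Y"
now    = datetime.strptime("Wed Aug 27 17:21:31 GMT 2025", fmt)
expiry = datetime.strptime("Wed Aug 27 19:16:23 GMT 2025", fmt)
print((expiry - now).total_seconds() / 60)   # about 115 minutes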

Logging on and creating a LTPA token

With

com.ibm.ws.security.token.ltpa.internal.LTPAToken2=finer

I got lots of data including

00000055 LTPAToken2    >  <init> Entry 
user:ADCDPL/COLIN
120

Which shows the userid, and the expiry interval in minutes

When the LTPA had expired

Current time = Thu Aug 28 06:51:35 GMT 2025, expiration time = Wed Aug 27 19:42:20 GMT 2025  
The token has expired: current time = Thu Aug 28 06:51:35 GMT 2025, expire time = Wed Aug 27 19:42:20 GMT 2025.

My WLM definitions were not behaving as I expected.

I had configured WLM so the MQ started tasks (CSQ*) were defined as a low priority STC.

  Subsystem-Type  Xref  Notes  Options  Help                              
--------------------------------------------------------------------------
Modify Rules for the Subsystem Type Row 22 to 25 of 25
Command ===> ___________________________________________ Scroll ===> CSR

Subsystem Type . : STC Fold qualifier names? Y (Y or N)
Description . . . All Started Tasks

Action codes: A=After C=Copy M=Move I=Insert rule
B=Before D=Delete row R=Repeat IS=Insert Sub-rule
More ===>
-------Qualifier-------- -------Class--------
Action Type Name Start Service Report
DEFAULTS: STCLOM ________
____ 1 TN CSQ9WEB ___ STCLOM MQ
____ 1 TN CSQ9CHIN ___ STCLOM MQ
____ 1 TN CSQ9ANG ___ STCLOM MQ

But I could see from SDSF that CSQ9CHIN’s SrvClass was STCHIM, and CSQ9WEB’s was STCHIM. It took me a couple of hours of digging to find out why.

Higher up the list, the WLM definitions had

         -------Qualifier--------                 -------Class--------    
Action Type Name Start Service Report
DEFAULTS: STCLOM ________
____ 1 TN %MASTER% ___ SYSTEM MASTER
____ 1 SPM SYSTEM ___ SYSTEM ________
____ 1 SPM SYSSTC ___ SYSSTC ________
____ 1 TNG STCHI ___ SYSSTC ________
____ 1 TNG STCMD ___ STCMDM ________
____ 1 TNG MONITORS ___ ________ MONITORS
____ 1 TNG SERVERS ___ STCMDM ________
____ 1 TNG ONLPRD ___ STCHIM ________

There is a definition for ONLPRD (online production), a group of transaction names (Transaction Name Group).

Option 5, Classification Groups, on the main WLM panel displays

                         Classification Group Menu                        
Select one of the following options.
__ 1. Accounting Information Groups 14. Plan Name Groups
2. Client Accounting Info Groups 15. Procedure Name Groups
3. Client IP Address Groups 16. Process Name Groups
4. Client Transaction Name Groups 17. Scheduling Environment Groups
5. Client Userid Groups 18. Subsystem Collection Groups
6. Client Workstation Name Groups 19. Subsystem Instance Groups
7. Collection Name Groups 20. Subsystem Parameter Groups
8. Connection Type Groups 21. Sysplex Name Groups
9. Correlation Information Groups 22. System Name Groups
10. LU Name Groups 23. Transaction Class Groups
11. Net ID Groups 24. Transaction Name Groups
12. Package Name Groups 25. Userid Groups
13. Perform Groups 26. Container Qualifier Groups

Most of these had no definitions, but option 24, Transaction Name Groups, gave me

                           Group Selection List                Row 1 to 5 of 5
Command ===> ____________________________________________________________

Qualifier type . . . . . . . : Transaction Name

Action Codes: 1=Create, 2=Copy, 3=Modify, 4=Browse, 5=Print, 6=Delete,
/=Menu Bar
-- Last Change ---
Action Name Description User Date
__ MONITORS Online System Activity monitors TODD 1999/11/16
__ ONLPRD Online Production Subsystems IBMUSER 2023/01/10
__ SERVERS Server Address Spaces TODD 1999/11/16
__ STCHI High STC's TODD 1999/11/16
__ STCMD Medium STC's TODD 1999/11/16

and these names match what is in the classification rules section above.

Option 3 to modify ONLPRD, gave

                              Modify a Group                   Row 1 to 8 of 8
Command ===> ____________________________________________________________

Enter or change the following information:

Qualifier type . . . . . . . : Transaction Name
Group name . . . . . . . . . : ONLPRD
Description . . . . . . . . . Online Production Subsystems
Fold qualifier names? . . . . Y (Y or N)

Qualifier Name Start Description
%%%%DBM1 ___ DB2 Subsystems
%%%%MSTR ___ DB2 Subsystems
%%%%DIST ___ DB2 Subsystems
%%%%SPAS ___ DB2 Subsystems
CICS* ___ CICS Online Systems
IMS* ___ IMS Online Systems
CSQ* ___ MQ Series

and we can see that MQ started tasks starting with CSQ are in this group.

As this definition is higher in the classification rules list, it takes precedence over any definitions I had defined lower down.

Because there was a definition (within the Started Task classification rules)

____  1 TNG       ONLPRD   ___                    STCHIM      ________   

Started tasks in the group ONLPRD are classified as STCHIM, and this explains why the classification of the MQ address spaces was “wrong”.
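
The “first matching rule wins” behaviour is easier to see in a small model. The sketch below is only an illustration of the ordering, not how WLM works internally; the task names and classes come from the panels above, and the wildcard handling is approximate.

from fnmatch import fnmatch

# WLM uses * for any characters and % for a single character;
# approximate % with fnmatch's single-character wildcard "?"
def matches(name, pattern):
    return fnmatch(name, pattern.replace("%", "?"))

ONLPRD = ["%%%%DBM1", "%%%%MSTR", "%%%%DIST", "%%%%SPAS", "CICS*", "IMS*", "CSQ*"]

# classification rules in the order they appear in the list
rules = [
    (ONLPRD,       "STCHIM"),   # TNG ONLPRD, higher up the list
    (["CSQ9WEB"],  "STCLOM"),   # my specific TN rules, lower down
    (["CSQ9CHIN"], "STCLOM"),
    (["CSQ9ANG"],  "STCLOM"),
]

def classify(task, default="STCLOM"):
    for names, service_class in rules:      # first match wins
        if any(matches(task, n) for n in names):
            return service_class
    return default

print(classify("CSQ9CHIN"))   # STCHIM, because the ONLPRD group rule matches first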

I had a couple of options

  • Change the groups and put MQ in its own group with STCLOM
  • Move my CSQ9* specific definitions above the group.

What’s the best way of connecting to an HTTPS server: pass ticket or JWT?

This blog post was written as background to some blog posts on Zowe API-ML. It provides background knowledge for HTTPS servers running on z/OS, and I think it is useful on its own. I’ve written about an MQWEB server, because I have configured this on my system.

The problem

I want to manage my z/OS queue manager from my Linux machine. I have several ways of doing it.

Which architecture?

  • Use an MQ client. Establish a client connection to the CHINIT, and use MQPUT and MQGET to send administration messages to, and receive responses from, the queue manager.
    • You can issue a command string, and get back a response string which you then have to parse
    • You can issue an MQINQ API request to programmatically query attributes, and get the values back in fields. No parsing, but you have to write a program to do the work.
  • Use the REST API. This is an HTTP request in a standard format into the MQWEB server.
    • You can issue a command string, and get back a response string which you then have to parse to extract the values.
    • You can send a JSON request, where the resource is encoded in the URL, and get the response back in JSON format. It is trivial to extract individual fields from the returned data (see the sketch after this list).
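
For example, with the JSON form of the response, picking out a field needs no text parsing at all. The sketch below uses hypothetical field names; the real names depend on the request you issue.

import json

# hypothetical fragment of a JSON response from the REST API
response_text = '{"overallCompletionCode": 0, "commandResponse": [{"completionCode": 0}]}'

doc = json.loads(response_text)
print(doc["overallCompletionCode"])                  # 0, no string parsing needed
print(doc["commandResponse"][0]["completionCode"])   # 0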

Connecting to the MQWEB server

If you use REST (over HTTPS) there are several ways of doing this

  • You can connect using userid and password. It may be OK to enter your password when you are at the keyboard, but not if you are using scripts and you may be away from your keyboard. If hackers get hold of the password, they have weeks to use it, before the password expires. You want to give your password once per session, not for every request.
  • You can connect using certificates, without specifying userid and password.
    • It needs a bit of set up at the server to map your certificate to a userid.
    • It takes some work to set up how to revoke your access, if you leave the company, or the certificate is compromised.
    • Your private key could be copied and used by hackers. There is discussion about reducing the validity period from over a year to 47 days. For some people this is still too long! You can have your private certificate on a dongle which you have to present when connecting to a back end. This reduces the risk of hackers using your private key.
  • You can connect with both a certificate and a userid and password. The certificate is used to establish the TLS session, and the userid and password are used to log on to the application.
  • You can use a pass ticket. You invoke a z/OS service which, if you are authorised, generates a one-time password valid for 10 minutes or less. If hackers get hold of the pass ticket, they do not have long to exploit it. The application generating the pass ticket does not need the password of the userid, because the application has been set up as trusted.
  • You can use a JSON Web Token (JWT). This has some similarities with certificates. The payload contains a userid value and an issuer value. I think of the issuer as the domain the JWT has come from; it could be TEST or a company name. From the issuer value and IP address range, you configure the server to specify a realm value. From the userid and realm you can map this to a userid on the server. The JWT can be valid from minutes to many hours (but should be under a day). The userid and realm mapping to a userid is different from the certificate mapping to a userid. (There is a sketch after this list showing how to peek inside a JWT.)
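
A JWT is just three base64url-encoded sections (header.payload.signature), so you can peek at the userid and issuer values without validating the token. A minimal Python sketch:

import base64, json

def peek(token: str) -> dict:
    """Return a JWT's payload claims without verifying the signature."""
    payload = token.split(".")[1]             # header.payload.signature
    payload += "=" * (-len(payload) % 4)      # restore base64 padding
    return json.loads(base64.urlsafe_b64decode(payload))

# claims = peek(jwt_string)
# print(claims["sub"], claims["iss"])         # the userid and issuer values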

Setting up a pass ticket

The pass ticket is used within the sysplex; it cannot be used outside of a sysplex. The pass ticket is a password, so it needs to be validated against the RACF database.

The application that generates the pass ticket must be authorised to a profile for the application. For example, for the application TSO on system S0W1, the profile is TSOS0W1.

 RDEFINE PTKTDATA TSOS0W1 

and a profile to allow a userid to create a pass ticket for the application

RDEFINE PTKTDATA   IRRPTAUTH.TSOS0W1.*  UACC(NONE) 

PERMIT IRRPTAUTH.TSOS0W1.* CLASS(PTKTDATA) ID(COLIN) ACCESS(UPDATE)
PERMIT IRRPTAUTH.TSOS0W1.* CLASS(PTKTDATA) ID(IBMUSER) ACCESS(UPDATE)

Userids COLIN and IBMUSER can issue the callable service IRRSPK00 to generate a pass ticket for a user for the application TSOS0W1.

The output is a one-use password which has a validity of up to 10 minutes.

As an example, you could configure your MQWEB server to use profile name MQWEB, or CSQ9WEB.

How is it used

A typical scenario is for an application running on a work station to issue a request to an “application” on z/OS, like z/OSMF, to generate a pass ticket for a userid and application name.

The client on the work station then issues a request to the back end server, with the userid and pass ticket. If the back end server matches the application name then the pass ticket will be accepted as a password. The logon will fail if a different application is used, so a pass ticket for TSO cannot be used for MQWEB.
This is more secure than sending a userid and password up with every back end request, but there is additional work in creating the pass ticket, and two network flows.

This solution scales because very little work needs to be done on the work station, and there is some one-off work for the setup to generate the pass tickets.

JSON Web Tokens

See What are JSON Web Tokens and how do they work?

The JWT sent from the client has an expiry time. This can be from seconds to hours. I think it should be less than a day – perhaps a couple of hours at most. If a hacker has a copy of the JWT, they can use it until it expires.

The back end server needs to authenticate the token. It could do this by having a copy of the public certificate in the server’s keyring, or by sending a request down to the originator to validate it.

If validation is being done with public certificates, because the client’s private key is used to generate the JWT, the server needs a copy of the public certificate in the server’s keyring. This can make it hard to manage if there are many clients.
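
As an illustration of that public-key check, here is a minimal sketch using the Python PyJWT package. This is not what Liberty does internally; the key file and issuer value are assumptions.

import jwt   # pip install pyjwt[crypto]

def validate(token: str, public_key_pem: bytes) -> dict:
    """Verify the JWT signature, expiry, and issuer, and return the claims."""
    return jwt.decode(
        token,
        public_key_pem,
        algorithms=["RS384"],          # must match the signature algorithm
        issuer="zOSMF",                # reject tokens from another issuer
        options={"verify_aud": False},
    )

# claims = validate(jwt_string, open("issuer_public_key.pem", "rb").read())
# print(claims["sub"])                 # the userid, only if validation succeeded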

The Liberty web server has definitions like

<openidConnectClient id="RSCOOKIE"
    clientId="COLINCOO2"
    realmName="zOSMF"
    inboundPropagation="supported"
    issuerIdentifier="zOSMF"
    mapIdentityToRegistryUser="false"
    signatureAlgorithm="RS384"
    trustAliasName="CONN1.IZUDFLT"
    trustStoreRef="defaultKeyStore"
    userIdentifier="sub"
    >
    <authFilter id="afint">
        <remoteAddress id="myAddress" ip="10.1.0.2" matchType="equals" />
    </authFilter>
</openidConnectClient>

For this entry to be used various parameters need to match

  • The issuerIdentifier. This string identifies the client. It could be MEGABANK, TEST, or another string of your choice. It has to match what is in the JWT.
  • signatureAlgorithm. This matches the incoming JWT.
  • trustAliasName and trustStoreRef. These identify the certificate used to validate the JWT signature.
  • remoteAddress. This is the address, or address range of the client’s IP addresses.

If you have 1000 client machines, you may need 1000 <openidConnectClient…/> definitions, because of the different certificate and IP addresses.

You may need 1000 entries in the RACMAP mapping of userid + realm to userid to be used on the server.

How is it used

You generate the JWT. There are different ways of doing this.

  • Use a service like z/OSMF
  • Use a service on your work station. I have used Python to do this. The program is about 30 lines long and uses the Python jwt package (a sketch follows this list).
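
As an example of the work station approach, here is a minimal sketch using the PyJWT package. The key file name, userid and issuer are placeholders; the issuer and algorithm must match what the server's openidConnectClient definition expects.

#!/usr/bin/env python3
import time
import jwt   # pip install pyjwt[crypto]

# hypothetical RSA private key; the server validates the signature with
# the matching public certificate in its trust store
with open("myjwt.key.pem", "rb") as f:
    private_key = f.read()

claims = {
    "sub": "COLIN",                       # the userid
    "iss": "TEST",                        # must match the server's issuerIdentifier
    "iat": int(time.time()),              # issued at
    "exp": int(time.time()) + 2 * 3600,   # expires in two hours
}

print(jwt.encode(claims, private_key, algorithm="RS384"))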

You get back a long string. You can see what is in the string by pasting the JWT into jwt.io.
You pass this to the back end, as a cookie or in an HTTP header, depending on what the server is expecting. For example

'Authorization': "Bearer " + token

The JWT has limited access

For the server to use the JWT, it needs definitions to recognise it. If you have two back end servers

  • Both servers could be configured to accept the JWT
    • If the server specified a different REALM, then the mapped userid from the JWT could be different for each server because the userid/realm to userid mapping can be different.
  • One server is configured to accept the JWT
    • If only one server has the definitions for the JWT, then trying to use the JWT to logon to another server will fail.

Tracing input and output of the Liberty web server.

The Liberty web server is used by many IBM products on z/OS, for example z/OSMF, MQSeries and z/OSConnect (but not Zowe).

When using Zowe, I struggled to find out what data was input to the server. As usual, once you have found the answer it is easy.

Once it worked, I had

<httpAccessLogging id="accessLogging"
    logFormat="%a %s ET=%D %r i=%i c=%C "
    enabled="true"
/>
<httpEndpoint id="defaultHttpEndpoint"
    accessLoggingRef="accessLogging"
    httpPort="9080"
    httpsPort="10443"
/>

Where the httpEndpoint defines the ports (including https port 10443), and references the httpAccessLogging definition.

At one point I had two ports defined for https. I separated the output for each port using

filepath="${server.output.dir}/logs/http_10443_access.log" 

within the httpAccessLogging definition, to output the data to a specific file to match the port.

What data is output?

You can control what data is output. I used logFormat to output what I was interested in.

logFormat="%a %s ET=%D %r i=%i c=%C " 

Where

  • %a is the remote IP address 10.1.0.2
  • %s is the status – if it worked the value is 200.
  • ET=%D. This is the duration of the request in microseconds. It appears as ET=667601
  • %r the first line of the request POST /ibmmq/rest/v1/admin/action/qmgr/CSQ9/mqsc HTTP/1.
  • i=%i the header name from the request. My request did not have one so this comes out as i=-
  • c=%C gives the cookies. You can request a specific cookie. My output had c=jwtToken:eyJraWQiOiJhYVBkRzd5N…, which is the JSON Web Token. To see the contents, I took this token and pasted it into jwt.io.

You can ask for the date and time, but this comes out as a long string with year, month, day hh:mm:ss.uuuuuu. I found the year, month and day were not needed, but I could not find how to display just the time.

The output from the above format was

10.1.0.2 200 ET=667601 POST /ibmmq/rest/v1/admin/action/qmgr/CSQ9/mqsc HTTP/1.1 i=- c=jwtToken:eyJraWQiOiJhYVBkRzd5NmZETk5UT....
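
If you want to post-process the access log, the fixed format makes it easy to pull the fields apart. A rough Python sketch for the logFormat above (the regular expression is mine, not something Liberty provides):

import re

line = ('10.1.0.2 200 ET=667601 POST /ibmmq/rest/v1/admin/action/qmgr/CSQ9/mqsc '
        'HTTP/1.1 i=- c=jwtToken:eyJraWQiOiJhYVBkRzd5NmZETk5UT...')

# fields produced by logFormat="%a %s ET=%D %r i=%i c=%C "
m = re.match(r'(\S+) (\d+) ET=(\d+) (.+) i=(\S+) c=(\S+)', line)
if m:
    ip, status, et_us, request, header, cookie = m.groups()
    print(f"{request} from {ip}: status {status}, {int(et_us)/1000:.1f} ms")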

I can’t automatically allocate a data set, and my SMS setup is not helping.

I’m running my little zD&T z/OS system on my laptop. I am the only person on this system, so I have to do everything myself.

I started my MQ system last week, and now it is complaining that it cannot allocate archive logs. From my experience with MQ, I know this is serious. I know I have lots of space on my disks, so why can’t MQ use it?
I’ll go through the diagnostic path I took, which shows the SMS commands I used, and give the solution.

The blog post One minute SMS covers many of the concepts (and commands used).

The error messages

CSQJ072E %CSQ9 ARCHIVE LOG DATA SET 'CSQARC2.CSQ9.B0000002' HAS BEEN ALLOCATED TO NON-TAPE DEVICE AND CATALOGUED, OVERRIDING CATALOG PARAMETER                                    
IGD17272I VOLUME SELECTION HAS FAILED FOR INSUFFICIENT SPACE FOR DATA SET CSQARC2.CSQ9.A0000002 JOBNAME (CSQ9MSTR) STEPNAME (CSQ9MSTR) PROGNAME (CSQYASCP)
REQUESTED SPACE QUANTITY = 120960 KB
STORCLAS (SCMQS) MGMTCLAS ( ) DATACLAS ( )
STORGRPS (SGMQS SGBASE SGEXTEAV )
IKJ56893I DATA SET CSQARC2.CSQ9.A0000002 NOT ALLOCATED+
IGD17273I ALLOCATION HAS FAILED FOR ALL VOLUMES SELECTED FOR DATA SET
CSQARC2.CSQ9.A0000002
IGD17277I THERE ARE (247) CANDIDATE VOLUMES OF WHICH (7) ARE ENABLED OR
QUIESCED
IGD17290I THERE WERE 3 CANDIDATE STORAGE GROUPS OF WHICH THE FIRST 3 814
WERE ELIGIBLE FOR VOLUME SELECTION.
THE CANDIDATE STORAGE GROUPS WERE:SGMQS SGBASE SGEXTEAV
IGD17279I 240 VOLUMES WERE REJECTED BECAUSE THEY WERE NOT ONLINE
IGD17279I 240 VOLUMES WERE REJECTED BECAUSE THE UCB WAS NOT AVAILABLE
IGD17279I 7 VOLUMES WERE REJECTED BECAUSE THEY DID NOT HAVE SUFFICIENT
SPACE (041A041D)

Why is it using the storage class SCMQS?

From the ISMF panels,

  • option 7 Automatic Class Selection
  • option 5 Display – Display ACS Object Information

Gives a panel

   Panel  Utilities  Help                                                       
──────────────────────────────────────────────────────────────────────────────
ACS OBJECT DISPLAY
Command ===>

CDS Name : ACTIVE

ACS Rtn Source Data Set ACS Member Last Trans Last Date Last Time
Type Routine Translated from Name Userid Translated Translated
-------- ----------------------- -------- ---------- ---------- ----------
DATACLAS SYS1.S0W1.DFSMS.CNTL DATACLAS IBMUSER 2019/12/17 15:21
MGMTCLAS ----------------------- -------- -------- ---------- -----
STORCLAS SYS1.S0W1.DFSMS.CNTL STORCLAS IBMUSER 2020/12/02 11:23
STORGRP SYS1.S0W1.DFSMS.CNTL STORGRP IBMUSER 2019/12/17 15:23

So the ACS routine is in SYS1.S0W1.DFSMS.CNTL(STORCLAS)

This file has

PROC STORCLAS
  FILTLIST MQS_HLQ INCLUDE(CSQ*.**,
                           CSQ.**,
                           MQS.**,
                           MQS*.**)
  ...
  SELECT
    ...
    WHEN (&DSN = &MQS_HLQ)
      DO
        SET &STORCLAS = 'SCMQS'
        EXIT CODE(0)
      END
    ...
  END
END

This says that for any data set name (&DSN) that matches the list &MQS_HLQ, which has CSQ* or MQS*, set the storage class to ‘SCMQS’.

What storage groups are connected with the MQ data set?

Member SYS1.S0W1.DFSMS.CNTL(STORGRP) has

...
WHEN (&STORCLAS = 'SCMQS')
  DO
    SET &STORGRP = 'SGMQS','SGBASE','SGEXTEAV'
    EXIT CODE(0)
  END
...

so these are the storage groups that MQ data sets will use.

What DASD volumes are in the storage group?

D SMS,SG(SGbase)                             
IGD002I 13:34:38 DISPLAY SMS 699

STORGRP TYPE SYSTEM= 1
SGBASE POOL +
SPACE INFORMATION:
TOTAL SPACE = 29775MB USAGE% = 98 ALERT% = 0
TRACK-MANAGED SPACE = 29775MB USAGE% = 98 ALERT% = 0

This shows there is 29775 MB of total space, and it is 98% used.

D SMS,SG(SGMQS)                                                        
IGD002I 13:31:33 DISPLAY SMS 678

STORGRP TYPE SYSTEM= 1
SGMQS POOL +
SPACE INFORMATION:
NOT AVAILABLE TO BE DISPLAYED
***************************** LEGEND *****************************
. THE STORAGE GROUP OR VOLUME IS NOT DEFINED TO THE SYSTEM
+ THE STORAGE GROUP OR VOLUME IS ENABLED
- THE STORAGE GROUP OR VOLUME IS DISABLED
* THE STORAGE GROUP OR VOLUME IS QUIESCED
D THE STORAGE GROUP OR VOLUME IS DISABLED FOR NEW ALLOCATIONS ONLY
Q THE STORAGE GROUP OR VOLUME IS QUIESCED FOR NEW ALLOCATIONS ONLY
> THE VOLSER IN UCB IS DIFFERENT FROM THE VOLSER IN CONFIGURATION
SYSTEM 1 = S0W1

There are no volumes allocated to this storage group.

What volumes are in the storage group?

D SMS,SG(SGBASE),LISTVOL                                             
IGD002I 13:39:07 DISPLAY SMS 705

STORGRP TYPE SYSTEM= 1
SGBASE POOL +
SPACE INFORMATION:
TOTAL SPACE = 29775MB USAGE% = 98 ALERT% = 0
TRACK-MANAGED SPACE = 29775MB USAGE% = 98 ALERT% = 0

VOLUME UNIT MVS SYSTEM= 1 STORGRP NAME
B3USR1 0ADA ONRW + SGBASE
USER0A + SGBASE
USER0B + SGBASE
USER0C + SGBASE
USER0D + SGBASE
USER0E + SGBASE
USER0F + SGBASE
USER00 0A9C ONRW + SGBASE
USER01 + SGBASE
USER02 0AB0 ONRW + SGBASE
USER03 0ACE ONRW + SGBASE
USER04 0AB2 ONRW + SGBASE
USER05 0AB5 ONRW + SGBASE
USER06 0A83 ONRW + SGBASE
...
+ THE STORAGE GROUP OR VOLUME IS ENABLED

How do I see how much space is available on my disks?

ISMF,

  • option 2 – Volume
  • option 1 – DASD

This gives a panel

                          VOLUME SELECTION ENTRY PANEL              Page 1 of 3
Command ===>

Select Source to Generate Volume List . . 2 (1 - Saved list, 2 - New list)
1 Generate from a Saved List Query Name To
List Name . . COLIN Save or Retrieve
2 Generate a New List from Criteria Below
Specify Source of the New List . . 1 (1 - Physical, 2 - SMS)
Optionally Specify One or More:
Enter "/" to select option Generate Exclusive list
Type of Volume List . . . 1 (1-Online,2-Not Online,3-Either)
Volume Serial Number . . USER* (fully or partially specified)
Device Type . . . . . . . (fully or partially specified)
Device Number . . . . . . (fully specified)
To Device Number . . . (for range of devices)
Acquire Physical Data . . Y (Y or N)
Acquire Space Data . . . Y (Y or N)
Storage Group Name . . . (fully or partially specified)
CDS Name . . . . . . .
(fully specified or 'Active')
Use ENTER to Perform Selection; Use DOWN Command to View next Selection Panel;
Use HELP Command for Help; Use END Command to Exit.

or

        Enter "/" to select option      Generate Exclusive list                 
Type of Volume List . . . 1 (1-Online,2-Not Online,3-Either)
Volume Serial Number . . * (fully or partially specified)
Device Type . . . . . . . (fully or partially specified)
Device Number . . . . . . (fully specified)
To Device Number . . . (for range of devices)
Acquire Physical Data . . Y (Y or N)
Acquire Space Data . . . Y (Y or N)
Storage Group Name . . . SGBASE (fully or partially specified)
CDS Name . . . . . . . 'ACTIVE'
(fully specified or 'Active')

You can specify a Volume Serial prefix, a Storage Group Name, or a combination of both.

You need to select Acquire Physical Data, and Acquire Space Data.

You get output like

LINE     VOLUME  FREE      %    ALLOC     FRAG  LARGEST  FREE
OPERATOR SERIAL  SPACE     FREE SPACE     INDEX EXTENT   EXTENTS  ... ...
---(1)-- -(2)--  ---(3)--- (4)- ---(5)--- -(6)- --(7)--- --(8)--
         B3USR1  149186K   2    8165315K  375   34032K   36
         USER00  67067K    1    8247434K  718   2490K    133
         USER02  30601K    1    2740899K  412   11621K   31
         USER03  3209K     0    2768291K  333   2213K    6
         USER04  146198K   5    2625302K  280   42332K   19
         USER05  64466K    2    2707034K  9     63802K   3
         USER06  273304K   10   2498196K  177   105581K  14

Which shows I do not have much free space.
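
The % FREE column appears to be the free space as a fraction of the volume (free plus allocated); a quick check against the rows above:

free, allocated = 149_186, 8_165_315            # KB, volume B3USR1
print(round(100 * free / (free + allocated)))   # 2, matching the 2% shown

free, allocated = 273_304, 2_498_196            # KB, volume USER06
print(round(100 * free / (free + allocated)))   # 10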

Add more space

As it looks like my storage group pools are low on disk space, I need to allocate more volumes.

See Adding more disk space to z/OS, creating volumes and adding them to SMS.

Once I added the volume to the SGBASE storage group, its usage went from

TOTAL SPACE = 29775MB USAGE% = 98 ALERT% = 0                      
TRACK-MANAGED SPACE = 29775MB USAGE% = 98 ALERT% = 0

to

TOTAL SPACE = 32482MB USAGE% = 89 ALERT% = 0                      
TRACK-MANAGED SPACE = 32482MB USAGE% = 89 ALERT% = 0

MQWEB: I want to trace the https requests

See HTTP access logging.

When I had the following in my mqwebuser.xml file,

<httpAccessLogging id="accessLogging"/>
<httpEndpoint id="defaultHttpEndpoint" httpsPort="9443">
    <accessLogging filepath="${server.output.dir}/logs/http_defaultEndpoint_access.log"/>
</httpEndpoint>

it gave me a record like

10.1.0.2 IBMUSER [10/Aug/2025:17:12:25 +0000] "GET /ibmmq/rest/v1/admin/action/qmgr/CSQ9/mqsc HTTP/1.1" 405

so I could see the HTTP code (405) from my request.

I got the HTTP 405 code because I specified type GET instead of type POST! Another of those “obvious once you see it” problems.

Configure the mqweb server to accept JWT.

See my blog post JWT to learn what JSON Web Tokens are, and how they work.

In Liberty, the JWT is processed by the OpenID Connect client.

In my mqwebuser.xml I had

<featureManager> 
<feature>transportSecurity-1.0</feature>
<feature>openidConnectClient-1.0</feature>
</featureManager>
<openidConnectClient
    id="RS2"
    clientId="COLINSOC"
    jwkEndpointUrl="https://10.1.1.2:10443/jwt/ibm/api/zOSMFBuilder/jwk"
    inboundPropagation="supported"
    issuerIdentifier="zOSMF"
    mapIdentityToRegistryUser="true"
    signatureAlgorithm="RS384"
    trustAliasName="CONN2.IZUDFLT"
    trustStoreRef="defaultKeyStore"
    userIdentifier="sub"
/>

Where

  • id="RS2" is any label
  • clientId="COLINSOC" is another label
  • jwkEndpointUrl="https://10.1.1.2:10443/jwt/ibm/api/zOSMFBuilder/jwk"
  • inboundPropagation="supported"
  • issuerIdentifier="zOSMF" matches the issuer (iss) in the JWT
  • mapIdentityToRegistryUser: specify false to use the RACF mapping rather than the Liberty userid mapping
  • signatureAlgorithm="RS384" has to match what is in the task that creates the JWT. When I used RS256, it came out as Elliptic Curve. I configured z/OSMF and MQWEB both to use RS384 and it worked.
  • trustAliasName="CONN2.IZUDFLT" is the certificate to use in the validation
  • trustStoreRef="defaultKeyStore" points to the definition of the trust keystore to use
  • userIdentifier="sub": the user name is taken from this field in the JWT

The JWT had

header

{
"kid": "aaPdG7y6fDNNTMCT6wb9-Oe21M63dPS3MtCeF7kYKn8",
"typ": "JWT",
"alg": "RS384"
}

The payload had

token_type: Bearer
sub: IBMUSER (the subject of the JWT, the user)
upn: IBMUSER
groups: ["DBBADMNS","IZUADMIN","IZUUSER","PKIGRP","SYS1","ZWEADMIN"]
realm: SAFRealm
iss: zOSMF (the issuer of the JWT)
exp: 1754240783 (Sun Aug 03 2025 18:06:23 GMT+0100 (British Summer Time)), the expiration time on or after which the JWT must not be accepted for processing
iat: 1754237783 (Sun Aug 03 2025 17:16:23 GMT+0100 (British Summer Time)), the time at which the JWT was issued
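
The exp and iat values are seconds since the Unix epoch. A throw-away Python sketch to turn them into something readable (jwt.io does this conversion for you):

from datetime import datetime, timezone

for name, secs in (("iat", 1754237783), ("exp", 1754240783)):
    print(name, datetime.fromtimestamp(secs, tz=timezone.utc))
# the token was valid for exp - iat = 3000 seconds, i.e. 50 minutes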

Getting it to work

I used Setting mqweb trace on z/OS and other useful hints on tracing extensively.

Using JWT and when it goes wrong has some debugging hints.

Processing lines in ASCII files in ISPF edit macros made looking at log files so much easier, by displaying lines from a trace file on one screen – rather than having to scroll sideways many times.

Using JWT and when it goes wrong

General

In the Liberty traces, I tended to look for the last few CWW…. messages.

Processing lines in ASCII files in ISPF edit macros made looking at log files so much easier.

Tracing the openidConnectClient activity

You can use the trace

com.ibm.ws.security.*=all:com.ibm.ws.webcontainer.security.*=all:com.ibm.oauth.*=all:com.ibm.wsspi.security.oauth20.*=all:org.openid4java.*=all:org.apache.http.client.*=all:io.openliberty.security.*=all

to get a lot of information about the activity.

  • com.ibm.oauth.*=all didn’t give me anything.
  • com.ibm.ws.webcontainer.security.*=fine didn’t produce anything
  • com.ibm.ws.webcontainer.security.*=finer produced good stuff – too much info

I used Setting mqweb trace on z/OS and other useful hints on tracing extensively to look at the Liberty traces.

Messages

CWWKS1776E: Validation failed for the token requested by (COLINCOO2) using the (RS384) algorithm due to a signature verification failure:

CWWKS1737E: The OpenID Connect client (COLINCOO2) failed to validate the JSON Web Token. The cause of the error was: (JWT rejected due to invalid signature).
After I added the certificate to the keyring, I needed to restart the server to pick up the change.

CWWKS2915E: SAF service IRRSIA00_CREATE did not succeed because group
null was not found in the SAF registry. SAF return code 0x00000008. RACF return code 0x00000008. RACF reason code 0x00000010.

Explanation: The JWT has a userid, and the userid/realm mapping does not exist in the RACMAP definitions. I think this is a bug; it should not have got into IRRSIA00_CREATE if there is no userid.

Basic configuration errors

When there was no matching issuerIdentifier in the openidConnectClient, I got

HTTP/2 401
www-authenticate: Bearer realm="jwt", error="invalid_token", error_description="Check JWT token"

{"error_description":"OpenID Connect client returned with status: SEND_401","error":401}

With the above I got in the trace

… Jose4jUtil E CWWKS1737E: The OpenID Connect client (…) failed to validate the JSON Web Token . The cause of the error was: (
CWWKS1773E: Validation failed for the token requested by the (…) OpenID Connect client for the (…) user because the token is outside of its valid range. This error occurs either because the (2025-08-08T18:45:15.182Z) current time is after the (2025-08-08T18:03:21.000Z) token expiration time or because the (2025-08-08T17:13:21.000Z) issue time is too far away from the (2025-08-08T18:45:15.182Z) current time.)

Which means the token has expired.

Using a Python script to access MQWEB with JSON Web Tokens

See JWT for my blog post on what JWTs are and how they work.

I also gave myself the additional challenge of not saving sensitive information in disk files.

Once I had got the basics working using a Bash script, I used Python as a proper solution, because I could capture the information from the requests much more easily.

Overall application

My overall application has two parts: get a JWT from z/OSMF, then use it to issue the MQ REST request.

Python

Get the JWT

#!/usr/bin/env python3
from timeit import default_timer as timer
import ssl

#import time
#import base64
#import json
import sys
from http.client import HTTPConnection # py3
import requests
import urllib3
# trace the traffic flow
HTTPConnection.debuglevel = 1

my_header = {'Accept': 'application/json'}

urllib3.disable_warnings()

geturl = "https://10.1.1.2:10443/zosmf/services/authenticate"

context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)

# client certificate and key used to authenticate to z/OSMF
certificate = "colinpaice.pem"
key = "colinpaice.key.pem"
cpcert = (certificate, key)

jar = requests.cookies.RequestsCookieJar()

# CA certificate used to validate the server's certificate
caCert = './doczosca.pem'

s = requests.Session()
res = s.post(geturl, headers=my_header, cookies=jar, cert=cpcert, verify=caCert)

if res.status_code != 200:
    print(res.status_code)
#headers = res.headers
#print("Header",type(headers))
#for h in headers:
#    print(h,headers[h])

# extract the jwtToken cookie returned by z/OSMF
cookies = res.cookies.get_dict()
token = ""
for c in cookies:
    print("cookie", c, cookies[c])
    if c == "jwtToken":
        token = cookies[c]

if token == "":
    print("No jwtToken cookie returned ")
    sys.exit(8)

Issue the MQ command

print("===========NOW DO MQ ==============")
mqurl="https://10.1.1.2:9443/ibmmq/rest/v1/admin/action/qmgr/CSQ9/mqsc"
tok = "Bearer " + token
mq_header = {
'Accept' : 'application/json',
'Authorization' : tok,
'Content-Type': 'application/json',
'ibm-mq-rest-csrf-token' : ''
}

data={"type": "runCommand",
"parameters": {"command": "DIS QMGR ALL"}}

mqres = s.post(mqurl,headers=mq_header,cookies=jar,verify=False,json=data)

print("==MQRES",mqres)
print("mqheader",mqres.headers )
print("mqtext",mqres.text)

sys.exit(0)

Notes:

  • The authorisation value is created by concatenating "Bearer " with the jwtToken value.
  • The data is sent as JSON: {"type": "runCommand", …}. It needs the header 'Content-Type': 'application/json'.

Using a Bash script to access MQWEB with JSON Web Tokens

See JWT for my blog post on what JWTs are and how they work.

I also gave myself the additional challenge of not saving sensitive information in disk files.

Once I got the scripts working, I used a Python script, which was much easier to use.

Overall application

My overall application has two parts: get a JWT from z/OSMF, then use it to issue the MQ REST request.

BASH

I initially tried using a BASH script for creating and using JWT to issue MQ REST API requests to MQWEB.

This worked, but capturing the JWT from the cookie was not easy to implement.

Get the JWT

#!/bin/bash
rm cookie.jar.txt

url="https://10.1.1.2:10443/zosmf/services/authenticate"
tls="--cacert doczosca.pem --tlsv1.2 --tls-max 1.2"
certs=" --cert ./colinpaice.pem:password --key ./colinpaice.key.pem"
insecure="--insecure"
cj="--cookie cookie.jar.txt --cookie-jar cookie.jar.txt"

curl -v $cj $tls $certs $url $insecure

Note: If there was a valid JWT in the cookie store, the code did not return a JWT. I deleted the cookie file to get round this.

Issue the MQ command

#!/bin/bash
set -x

url="https://10.1.1.2:9443/ibmmq/rest/v1/admin/action/qmgr/CSQ9/mqsc"

token="..."

tls="--cacert ./doczosca.pem --tlsv1.2"
certca="--cacert ./doczosca.pem "

origin="-H Origin:"
post="-X POST"
# need --insecure to avoid subjectAltName does not match
insecure="--insecure"

cj="--cookie cookie.jar.txt --cookie-jar cookie.jar.txt"

curl --verbose -H "Authorization: Bearer $token" -H "Connection: close" $cj $header $insecure $verify $tls -H "Content-Type: application/json" -H "ibm-mq-rest-csrf-token: value" $certs $trace $url --data "{ \"type\": \"runCommand\", \"parameters\": {\"command\": \"DIS QMGR ALL\"} }"

I used cut and paste to copy the JWT from the output of the curl z/OSMF request, and pasted it into token="..." in the MQ script.
I did this because my Bash scripting was not up to extracting the JWT from the z/OSMF script’s output; the sketch below shows one way it could be done.
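
For completeness, here is one way the jwtToken could have been pulled out of the curl cookie jar, sketched in Python rather than Bash. The cookie jar file is tab-separated Netscape format (domain, flag, path, secure, expiry, name, value), and HttpOnly cookies get a "#HttpOnly_" prefix on the domain field.

token = None
with open("cookie.jar.txt") as f:
    for line in f:
        # do not skip lines starting with "#": HttpOnly cookies are
        # written with a "#HttpOnly_" prefix but still have 7 fields
        fields = line.rstrip("\n").split("\t")
        if len(fields) == 7 and fields[5] == "jwtToken":
            token = fields[6]

print(token)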