Familiarity makes you blind to ugliness.

I’ve been using z/OS in some form for over 40 years and I do not see many major problems with it (some other people may – but it cannot be very many). It is the same with MQ on z/OS – but since I left IBM, and did not use MQ on z/OS for a year or so, I am now coming to it as a customer, and I do see a few things that could be improved. When these products were developed, people were grateful for any solution which worked, rather than an elegant solution which did not work. In general people do not like to be told their baby is ugly.

I have been programming in Python, and have gone back to C to extract certificate information from RACF, and C looks really ugly in comparison. It may be lack of vision from the people who managed the C language, or from the people who set standards, but the result is ugly.

My little problem (well, one of them)

I have decoded a string returned from RACF, and it comes back in an ugly structure which uses

typedef struct _attribute {
    oid identifier;
    buffer * pValue;
    int lName;
} x509_rdn_attribute;

typedef enum {
    unknown = 0,
    name = 1,
    surname = 2,
    givenName = 3,
    initials = 4,
    commonName = 6
    ....
} oid;

My problem is that I have an oid identifier of 6 – how do I map this to the string “commonName”?

I could use a big switch statement, but that means I have to preprocess the list, or type it all in. I then need to keep an eye on the provided definitions, and update my file if they change. The product could provide a function to do the work for me. A good start – but perhaps the wrong answer.

If I ruled the world…

As well as “Every day would be the first day of Spring”, I would have the C language provide a lookup function for each enum, so mune_oid(2) returns “surname”.

I wrote some code to interpret distributed MQ trace, and I had to do this reverse mapping for many field types. In the end, I wrote a Python script which took the CMQC.h header file and transformed each enum into a function which did the switch, and returned the symbolic name for the number.

I came up with the idea of using a definitions file, colin.h, with clever macros:

TYPEDEF_ENUM(oid)
  ENUM(unknown, 0, descriptive comment)
  ENUM(name, 1, persons name)
  ...
END_TYPEDEF_ENUM(oid)

For normal usage I would define macros

#ifndef TYPEDEF_ENUM
#define TYPEDEF_ENUM(a) typedef enum {
#define ENUM(a,b,c) a = b, /* c */
#define END_TYPEDEF_ENUM(a) } a;
#endif
#include <colin.h>

For the lookup processing the macros could be

#define TYPEDEF_ENUM(a) char * lookup_##a(int in){ switch(in){
#define ENUM(a,b,c) case b: return #a; /* c */
#define END_TYPEDEF_ENUM(a) } return "Unknown"; }
#include <colin.h>

but this solution means I have to include colin.h more than once, which may cause duplicate definitions.

My solution looks uglier than the problem, so I think I’ll just stick to my Python solution and creating a mune… function.

 

Here’s another nice mess I’ve gotten into!

Or “How to clean up the master catalog when you have filled it up with junk”. Looking at my z/OS system, I was reminded of my grandfather’s garage/workshop, where the tools were carefully hung up on walls, the chisels were carefully stored in a cupboard to keep them sharp, etc. He had boxes of screws, different sizes and types in different boxes. My father had a shed with a big box of tools. In the box were his chisels, hammers, saws etc. He had a big jar of “Screws – miscellaneous – to be sorted”. The z/OS master catalog should be like my grandfather’s garage, but I had made it like my father’s shed.

Well, what a mess I found!   This blog post describes some of the things I had to do to clean it up and follow best practices.

In days of old, well before PCs were invented, all data sets were cataloged in the master catalog. Once you got tens of thousands of data sets on z/OS, the time to search the catalog for a data set name increased, and the catalogs got larger and harder to manage. This was solved about 40 years ago by developing the User Catalog – a catalog for user entries.
Instead of 10,000 entries for my COLIN.* data sets, there should be an alias COLIN in the master catalog which points to another catalog, which could be just for me, or shared with other users. This means that even if I have 1 million data sets in the user catalog, the access time for system data sets is not affected. What I expected to see in the master catalog was the system data sets, and aliases for the userids. Instead I had over 400 entries for COLIN.* data sets, 500 BACKUP.COLIN.* data sets, 2,000 MQ.ARCHIVE.* data sets, etc. What a mess!

Steps to solve this.

Prevention is better than cure.

You can use the RACF option PROTECTALL. This says a userid needs a RACF profile before it can create a data set. This means each userid (and group) needs a profile like ‘COLIN.*’, with the userid given access to that profile. Once you have done this for all userids, you can use the RACF command SETROPTS PROTECTALL(WARNING) to enable this support. This will allow users to create data sets when there is no profile, but produces a warning message on the operator console – so you can fix it. An authorised person can use SETROPTS NOPROTECTALL to turn this off. Once you have this running with no warnings, you can use the command SETROPTS PROTECTALL to make it live – without warnings – and you will live happily ever after, or at least till the next problem.

Action:

  1. Whenever you create a userid you need to create the RACF dataset profile for the userid.
  2. You also need to set up an ALIAS for the new userid to point to a User Catalog.

How bad is the problem?

You can use IDCAMS to print the contents of a catalog

//S1 EXEC PGM=IDCAMS 
//SYSPRINT DD SYSOUT=*
//SYSIN DD *
LISTCAT CATALOG(CATALOG.Z24A.MASTER) NAME
/*

This has output like

NONVSAM ------- BACKUP.USER.Z24A.VTAMLST.D201210 
NONVSAM ------- BACKUP.USER.Z24A.VTAMLST.D201222
NONVSAM ------- BACKUP.USER.Z24A.VTAMLST.D201224
ALIAS --------- BAQ300

This says there are datasets BACKUP… which should not be in the catalog.
There is an Alias BAQ300 which points to a user catalog.   This is what I expect.

The IDCAMS command

LISTCAT ALIAS CATALOG(CATALOG.Z24A.MASTER) ALL

lists all of the aliases in the catalog, for example

ALIAS --------- BAQ300 
... 
ASSOCIATIONS
USERCAT--USERCAT.Z24A.PRODS

This shows that for high level qualifier BAQ300, go and look in the user catalog USERCAT.Z24A.PRODS.

Moving the entries out of the Master Catalog

The steps to move the COLIN.* entries out of the Master Catalog are

  1. Create a User Catalog
  2. Create an ALIAS COLIN2 which points to this User Catalog. 
  3. Rename COLIN…. to COLIN2….
  4. Create an ALIAS COLIN for all new data sets.
  5. Rename COLIN2… to COLIN…
  6. Delete the ALIAS COLIN2.

Create a user catalog

Use IDCAMS to create a user catalog

DEFINE USERCATALOG -
  ( NAME('A4USR1.ICFCAT') -
  MEGABYTES(15 15) -
  VOLUME(A4USR1) -
  ICFCATALOG -
  FREESPACE(10 10) -
  STRNO(3) ) -
  DATA( CONTROLINTERVALSIZE(4096) -
  BUFND(4) ) -
  INDEX( BUFNI(4) )

To list what is in a user catalog

Use an IDCAMS command similar to the one used to list the master catalog

LISTCAT ALIAS CATALOG(A4USR1.ICFCAT) ALL

Create an alias for COLIN2

 DEFINE ALIAS (NAME(COLIN2) RELATE('A4USR1.ICFCAT') ) 

Get the COLIN.* entries from the Master Catalog into the User Catalog

This was a bit of a challenge, as I could not see how to do a global rename.

You can rename non-VSAM data sets either using ISPF 3.4 or using the TSO rename command in batch.

The problem occurs with the VSAM data sets. When I tried to use the IDCAMS rename, I got error code IGG0CLE6-122, which says I tried to do a rename which would cause a change of catalog.

The only way I found of doing it was to copy the data sets to a new high level qualifier, and delete the originals. Fortunately DFDSS has a utility (ADRDSSU) which can do this for you.

//S1 EXEC PGM=ADRDSSU,REGION=0M,PARM='TYPRUN=NORUN'
//SYSPRINT DD SYSOUT=*
//SYSIN DD *
COPY -
DATASET(INCLUDE(COLIN.** )) -
DELETE -
RENUNC(COLIN2)
/*

Most of the data sets were “renamed” to COLIN2… but I had a ZFS which was in use, and some data set aliases. I used

  •  the TSO command unmount filesystem(‘COLIN.ZCONNECT.BETA.ZFS’)
  • the  IDCAMS command DELETE COLIN.SCEERUN ALIAS for each of the aliases.

and reran the copy job.   This time it renamed the ZFS.  The renaming steps are

  • Check there are no datasets with the HLQ COLIN.
  • Define an alias for COLIN in the master catalog to point to a user catalog.
  • Rerun the copy job to copy from COLIN2 back to COLIN.
  • Mount the file system.
  • Redefine the alias to data sets (eg COLIN.SCEERUN).
  • Delete the alias for COLIN2.

To be super efficient, and like my grandfather, I could have upgraded the SMS ACS routines to force data sets to have the “correct” storage class, data class, or management class.  The job output showed  “Data set COLIN2.STOQ.CPY has been allocated with newname  COLIN.STOQ.CPY using STORCLAS SCBASE,  no DATACLAS, and no MGMTCLAS“.  These classes were OK for me, but may not be for a multi-user z/OS system.

One last thing: don’t forget to add the new user catalog to your list of data sets to back up.

Should I run MQ for Linux, on z/OS on my Linux?

Yes – this sounds crazy.  Let me break it down.

  1. zPDT is a product from IBM that allows me to run System/390 applications on my laptop.  I am running z/OS 2.4, and MQ 9.1.3.   For normal editing and running MQ it feels as fast as when I had my own real hardware at IBM.
  2. z/OS 2.4 can run docker images in a special z/OS address space called zCX.
  3. I can run MQ distributed in docker.

Stringing these together, I can run MQ distributed in a docker environment. The docker environment runs on z/OS. The z/OS runs on my laptop! To learn more about running this scenario on real z/OS hardware… 

20 years ago, someone was enthusiastically telling me how you could partition distributed servers using a product called VMWare.    They kept saying how good it was, and asking why wasn’t I excited about it.  I said that when I joined IBM – 20 years before the discussion (so 40 years ago), the development platform was multiple VS1 (an early MVS) running on VM/360.  Someone had VM/360 running under VM/360 with VS1 running on that!  Now that was impressive!

Now if only I could get z/OS to run in docker….

Should I Use MQ workflow in z/OSMF or not? Not.

It was a bit windy up here in Orkney, so instead of going out for a blustery walk, I thought I would have a look at the MQ workflow in z/OSMF; in theory it makes it easier to deploy a queue manager on z/OS. It took a couple of days to get z/OSMF working, a few hours to find there were no instructions on how to use the MQ workflow in z/OSMF, and after a few false starts, I gave up because there were so many problems! I hate giving up on a challenge, but I could not find solutions to the problems I experienced.

I’ve written some instructions below on how I used z/OSMF and the MQ workflow. They may be inaccurate or missing information as I was unable to successfully deploy a queue manager.

I think that the workflow technique could be useful, but not with the current implementation. You could use the MQ material as an example of how to do things.

I’ve documented below some of the holes I found.

Overall this approach seems broken.

I came to this subject knowing nothing about it.  Maybe I should have gone on a two week course to find out how to use it.  Even if I had had the education, I think it is broken. It may be that better documentation is needed.

For example

  • Rather than simplifying the process, the project owner may have to learn about creating and maintaining the workflows and the auxiliary files the flows use – for example loops in the JCL files.
  • I have an MQ fix in a library, so I have a QFIX library in my STEPLIB.   The generated JCL has just the three libraries SCSQLOAD, SCSQAUTH and SCSQANLE in STEPLIB. I cannot see how to get the QFIX library included. This sort of change should be trivial.
  • If I had upgraded from CSQ913 to CSQ920, I do not see how the workflows get refreshed to point to the new libraries.
  • There are bits missing – setting up the STARTED profile for the queue manager, and specifying the userid.

Documentation and instructions to tell you how to use it.

The MQ documentation points to the program directory, which just lists the contents of the ../zosmf directory. Using your psychic powers, you are meant to know that you have to go to the z/OSMF web console and use the Workflows icon. There is a book, z/OSMF Programming Guide, which tells you how to create a workflow – but not how to use one!

The z/OSMF documentation (which you can access from z/OSMF) is available here. It is a bit like “here is all we know about how to use it”, rather than “baby steps for the new user”. There is a lot to read, and it looks pretty comprehensive. My eyes glazed over after a few minutes reading and so it was hard to see how to get started.

Before you start

You need access to the files in the ZFS.

  • For example /usr/lpp/mqm/V9R1M1/zosmf/provision.xml – this is used without change.   It has all of the instructions that the system needs to create the workflow.
  • Copy /usr/lpp/mqm/V9R1M1/zosmf/workflow_variables.properties  to your directory.  You need to edit it and fill in all of the variables between <…>.   There are a lot of parameters to configure!

You need a working z/OSMF where you can logon, use TSO etc.

Using z/OSMF to use the work flow

When I logged on to z/OSMF the first time, I had a blank web page. I played around with the things at the bottom left of the screen, and then I got a lot of large icons.  These look a bit of a mess – rather than autoflowing to fill the screen, you either need to scroll sideways or make your screen very wide (18 inches).  The icons do not seem to be in any logical order: it starts with “Workflows”, and “SDSF” is a distance away from “SDSF settings”.  I could not see how to make the icons small, or how to sort them.

I selected “Classic” from the option by my userid.  This was much more usable: it had a compact list of actions, grouped sensibly, down the side of the screen.

Using “workflow” to define MQ queue managers.

Baby steps instructions to get you started are below.

  • Click on the Workflows icon.
  • From the actions pull down, select Create Workflow.
  • In the workflow definition file field, enter /usr/lpp/mqm/V9R1M1/zosmf/provision.xml, or whichever workflow file you plan to use.
  • In the Workflow variable input file field, enter the name of your edited workflow_variables.properties file.  Once you have selected this, the system copies the content. To use a different file, or to update the file, you have to create a new workflow.  If you update the file, any workflows based on it do not get updated.
  • From the System pull down, select the system this is for.
  • Click ‘Next’.   This will validate the variables it will be using.  Variables it does not use are not checked.
    • It complained that CSQ_ARC_LOG_PFX = <MQ.ARC.LOG.PFX> had not been filled in
    • I changed it to CSQ_ARC_LOG_PFX = MQ.ARCLOG – and it still complained
    • It would only allow CSQ_ARC_LOG_PFX = MQ, pity as my standards were MQDATA.ARCHIVE.queue_manager etc.
  • Once the input files have been validated it displays a window “Create Workflow”.  Tick the box “Assign all steps to owner userid”.  You can (re-)assign them to people later.
  • Click Finish.
  • It displays “Procedure to provision a MQ for zOS Queue manager 0 Workflow_o” and lists all of the steps – all 22 of them.
  • You are meant to do the steps in order.   The first step has State “Ready”, the rest are “Not ready”.
  • I found the interface unreliable.  For example
    • Right click on the Title of the first item.  Choose “Assignment And Ownership”.  All of the items are greyed out and cannot be selected.
    • If you click the tick box in the first column, click on “Actions” pull down above the workflow steps.  Select “Assignment And Ownership”.  You can now  assign the item to someone else.
      • If you select this, you get the “Add Assignees” panel.  By default it lists groups.  If you go to the “Actions” pull down, you can add a SAF userid or group.
      • Select the userids or groups and use the “Add >” to assign the task to them.
  • With the list of tasks, you can pick “select all”, and assign them;  go to actions pull down, select Assignment And Ownership, and add the userids or groups.
  • Once you are back at the workflow you have to accept the task(s).   Select those you are interested in, and pick Actions -> Accept. 
  • Single left click on the Title – “Specify Queue Manager Criteria”.  It displays a tabbed pane with tabs
    • General – a description of the task
    • Details – it says who the task has been assigned to  and its status.
    • Notes – you can add items to this
    • Perform – this actually does the task.
  • Click on the “Perform” tab.  If this is greyed out, it may not have been assigned to you, or you may not have accepted it.
    • It gives a choice of environments, eg TEST, Production.  Select one
    • The command prefix eg !ZCT1.
    • The SSID.
    • Click “Next”.  It gives you Review Instructions; click Finish.
  • You get back to the list of tasks.
    • Task 1 is now marked as Complete.
    • Task 2 is now ‘ready’.
    • Left single click task 2 “Validate the Software Service Instance Name Length”.
    • The Dependencies tab now has an item “Task 1 complete”.  This is all looking good.
  • Note: Instead of going into each task, sometimes you can use the Action -> perform to go straight there – but usually not.
  • Click on Perform
    • Click next
    • It displays some JCL which you can change, click Next. 
    • It displays “review JCL”.
      • It displays Maximum Record Length 1024.  This failed for me – I changed it to 80, and it still used 1024!
    • Click Next.  When I clicked Finish, I got “The request cannot be completed because an error occurred. The following error data is returned: IZUG476E: The HTTP request to the secondary z/OSMF instance “S0W1” failed with error type “HttpConnectionFailed” and response code “0”.”  The customising book mentions this; you get it if you use AT-TLS – which I was not using.  It may be caused by not having the IP address in my Linux /etc/hosts file.  Later, I added the address to the /etc/hosts file on my laptop, restarted z/OSMF, and it worked.
    • I unticked “Submit JCL”, and ticked “Save JCL”.  I saved it in a Unix file, and clicked finish.  It does not save the location so you have to type it in every time (or keep it in your clipboard), so not very usable.
    • Although I had specified a Maximum Record Length of 80, it still had 1024.  I submitted the job and it complained with “IEB311I CONFLICTING DCB PARAMETERS”.   I edited the JCL to change 1024 to 80 in the SPACE and DCB, and submitted it.  The JCL then worked.
    • When the rexx ran, it failed with …The workflow Software Service Instance Name (${_workflow-softwareServiceInstanceName})….  The substitution had not been done.  I don’t know why – but it means you cannot do any work with it.
    • When I tried another step, this also had no customisation done, so I gave up.
    • Sometimes when I ran this, the “Finish” button stayed greyed out, so I was unable to complete the step.  Shutting down and restarting z/OSMF usually fixed it.
  • I looked at the job “Define MQ Queue Manager Security Permissions” – this job creates a profile to disable MQ security – it did not define the security permissions for normal use.
  • I tried the step to dynamically allocate a port for the MQ channel initiator (CHIN).  I got the same IZUG476E error as before.   I fixed my IP address, and got another error: it received status 404 from the REST request.   In /var/zosmfcp/data/logs/zosmfServer/logs/messages.log I had SRVE0190E: File not found: /resource-mgmt/rest/1.0/rdp/network/port/actions/obtain.  For more information on getting a port see here.

Many things did not work, so I gave up.

Put messages to a queue workflow. 

I tried this, and had a little (very little) more success.

As before I did

  • Workflows
  • Actions pull down
  • Create workflow.   I used
    • /usr/lpp/mqm/V9R1M1/zosmf/putQueue.xml
    • and the same variable input file
  • Lots of clicks – including  Assign all steps to owner userid
  • Click Finish.   This produced a workflow with one step!
  • Left click on the step.  Go to Perform.  This lists
    • Subsystem ID
    • Queue name
    • Message_data
    • Number of messages.
  • Click Next.
  • Click Next, this shows the job card
  • Click Next,  this shows the job.
  • Click Next.  It has “Submit JCL” ticked.  Click Finish.   This time it managed to submit the JCL successfully!
  • After several seconds it displays the “Status” page, and after some more seconds, it displays the job output.
  • There is a tabbed panel with tabs for JESMSGLG, JESJCL, JESYSMSG,SYSPRINT.
  • I had a JCL error – I had made a mistake in the  MQ libraries High level qualifier.
  • I updated my workflow_variables.properties file, but I could not find a way of telling the workflow to use the updated variable properties file.  To solve this I had to
    • Go back to the Workflows screen where it lists the workflows I have created. 
    • Delete the workflow instance.
    • Create a new workflow instance, which picks up the changed file
    • I then deployed it and “performed” the change, and the JCL worked.
    • I would have preferred a quick edit to some JCL and resubmit the job, rather than the relatively long process I had to follow.
  • If this had happened during the “provision a queue manager” workflow, it would have been really difficult.   There is no “Undo Step”, so I think you would have had to create the de-provision workflow – which would have failed because many of the steps would not have been done – or delete the provision workflow, fix the script, and redo all the steps (or skip them).

If this all worked, would I use it?

There were too many clicks for my liking.  It feels like trying to simplify things has made them more complex.  There are many more things that could go wrong – and many did go wrong, and it was hard to fix them.  Some problems I could not fix.  I think these workflows are provided as an example to the customer.  Good luck with it!

A practical guide to getting z/OSMF working.

Every product using Liberty seems to have a different way of configuring the product.  At first I thought specifying parameters to z/OSMF was great, as you do it through a SYS1.PARMLIB member.  Then I found you have other files to configure; then I found that I could not reuse my existing definitions from z/OS Connect and MQWEB.  Then I found it erases any local copy of the server.xml file, and recreates it from the shipped configuration files each time.   Later I used this to my advantage.  Once again I seemed to be struggling against the product to do simple things.  Having gone through the pain, and learnt how to configure z/OSMF, its configuration is OK.

You specify some parameters in the SYS1.PARMLIB(IZUPRMxx) concatenation.  See here for the syntax and the list of parameters. In mid 2020 there was an APAR, PH24088, which allowed you to change these parameters dynamically, using a command such as:

SETIZU ILUNIT=SYSDA

Before you start.

I had many problems with certificates before I could successfully logon to z/OSMF.

I initially found it hard to understand where to specify configuration options, as I was expecting to specify them in the server.xml file. See z/OSMF configuration options mapping for the list of options you can specify.

If you change the options you have to restart the server.   Other systems that use Liberty have a refresh option which tells the system to reread the server.xml file.   z/OSMF stores the variables in the bootstrap.options file, and I could not find a refresh command which refreshed the data.   (There is a refresh command, but it does not refresh this.)  See z/OSMF commands.

 Define the userid

I used a userid with a home directory /var/zosmfcp/data/home/izusvr.   I had to issue

mkdir /var/zosmfcp
chown izusvr /var/zosmfcp

mkdir -p /var/zosmfcp/data/home/izusvr
chown izusvr /var/zosmfcp/data/home/izusvr

touch /var/zosmfcp/configuration/local_override.cfg 
chmod g+r /var/zosmfcp/configuration/local_override.cfg
chown :IZUADMIN /var/zosmfcp/configuration/local_override.cfg

Getting the digital certificate right.

I had problems using the provided certificate definitions.

  1. The CN did not match what I expected.
  2. There was no ALTNAME specified, for example ALTNAME(IP(10.1.1.2)) (where 10.1.1.2 was the external IP address of my z/OS). The browser complained because the certificate was not acceptable.   An ALTNAME must match the IP address or host name the data came from.  Without a valid ALTNAME you can get into a logon loop.  Using Chrome I got
    1. “Your connection is not private”. Common name invalid.
    2. Click on Advanced and “proceed to..  (unsafe)”
    3. Enter userid and password.
    4. I got the display with all of the icons.  Click on the pull-down and switch to “classic interface”.
    5. I got “Your connection is not private”. Common name invalid, and round the loop again.
  3. The keystore is also used as the trust store, so it needs the Certificate Authority’s certificates.  Other products using Liberty use a separate trust store.  (The keystore contains the certificate the server uses to identify itself.  The trust store contains the certificates, such as Certificate Authority certificates, used to validate certificates sent from clients to the server.)   With z/OSMF there is no definition for the trust store.   To make the keystore work as a trust store, the keystore needs:
    1. the CA for the server’s own certificate – z/OSMF talks to itself over TLS, so each end of the conversation within the server needs it to validate the server’s certificate.
    2. the CA for any certificates in any browsers being used.
    3. I had to use the following statements to convert my keystore to add the trust store entries.
      RACDCERT ID(IZUSVR) CONNECT(CERTAUTH -
      LABEL('MVS-CA') RING(KEY) )

      RACDCERT ID(IZUSVR) CONNECT(CERTAUTH -
      LABEL('Linux-CA2') -
      RING(KEY ) USAGE(CERTAUTH))

Reusing my existing keyring

Eventually I got this to work.  I had to…

    1. Connect the CA of the z/OS server into the keyring.
    2. Update /var/zosmfcp/configuration/local_override.cfg for ring //START1/KEY2:
       KEYRING_NAME=KEY2
       KEYRING_OWNER_USERID=START1
       KEYRING_TYPE=JCERACFKS
       KEYRING_PASSWORD=password

The z/OSMF started task userid requests CONTROL access to the keyring. 

It requests CONTROL access (RACF reports this!), but UPDATE access seems to work.  See RACF: Sharing one certificate among multiple servers.

With only READ access I got:

CWWKO0801E: Unable to initialize SSL connection. Unauthorized access was denied or security settings have expired. Exception is javax.net.ssl.SSLException: Received fatal alert: certificate_unknown

If it does not have UPDATE access, then z/OSMF cannot see the private certificate.

Use the correct keystore type. 

My RACF keyring keystore only worked when I had a keystore type of JCERACFKS.  I specified it in /var/zosmfcp/configuration/local_override.cfg

KEYSTORE_TYPE=JCERACFKS 

Before starting the server the first time

If you specify TRACE=’Y’, either in the procedure or as part of the start command, it traces the creating of the configuration file, and turns on the debug script.  TRACE=’X’ gives a BASH trace as well.

It looks like the name of the server is hard coded internally as zosmfServer, and the value in the JCL PARMS= is ignored.

Once it has started

If you do not get the logon screen, you may have to wait.  Running z/OS under my Ubuntu laptop is normally fine for editing etc., but it takes about 10 minutes to start z/OSMF.  If you get problems, with incomplete data displayed, or messages saying resources were not found, wait and see if they get resolved.  Sometimes I had to close and restart my browser.

Useful to know…

  1. Some, but not all, error messages are in the STDERR output from the started task.
  2. The logs and trace are in /var/zosmfcp/data/logs/zosmfServer/logs/.  Other products using Liberty keep the logs under the server’s own directory, so all the files would be under /var/zosmfcp/zosmfServer.
  3. The configuration is in /var/zosmfcp/configuration and /var/zosmfcp/configuration/servers/zosmfServer.   This is a different directory tree layout from other Liberty components.
  4. If you want to add one JVM option, edit the local_override.cfg and add   JVM_OPTIONS=-Doption=value.  See here if you want to add more options.  I used JVM_OPTIONS=-Djavax.net.debug=ssl:handshake to give me the trace of the TLS handshake.
  5. If you have problems with certificate not being found on z/OS, you might issue the SETROPTS  command to be 100% sure that what is defined to RACF is active in RACF.  Use SETROPTS RACLIST(DIGTCERT,DIGTRING,RDATALIB) refresh.  
  6. Check keyring contents using racdcert listring(KEY2) id(START1)
  7. Check certificate status using RACDCERT LIST(LABEL(‘IZUZZZZ2’ )) ID(START1) and check it has status:trust and the dates are valid.
  8. If your browser is not working as expected – restart it.

Tailoring your web browser environment.

Some requests, for example workflow, use a REST request to perform an action.  Although I had specified a HOSTNAME of 10.1.1.2, z/OSMF used an internal address of S0W1.DAL-EBIS.IHOST.COM.  When the REST request was issued from my browser, it could not find the back-end z/OS.  I had to add this site to my Linux /etc/hosts

10.1.1.2   S0W1.DAL-EBIS.IHOST.COM 

Even after I had resolved the TCPIP problem on z/OS which caused this strange name to be used, z/OSMF continued to use it. When I recreated the environment the problem was resolved. I could not see this value in any of the configuration files.

Getting round the configuration problems

I initially found it hard to specify additional configuration options to override what was provided by z/OSMF – for example, to reuse what I had from other Liberty servers.

I changed the server.xml file to include

<include optional="false" location="${server.config.dir}/colin.xml"/> 

This allowed me to put things in the colin.xml file.  I’m sure this solution is not recommended by IBM, but it is a practical one.  This file may be read-only to you.

You should not need this solution if you can use the z/OSMF configuration options mapping.

What should I monitor for MQ on z/OS – buffer pool

For the monitoring of MQ on z/OS, there are a couple of key metrics you need to keep an eye on for the buffer pools, as you sit watching the monitoring screen.

A quick overview of MQ buffer pools

An inefficient queue manager would have all messages stored on disk, in page sets.  An efficient queue manager caches hot pages in memory so they can be accessed without doing any I/O.

The MQ Buffer Manager component does this caching, by using buffers  (so no surprise there).

A simplified view of the operation is as follows

  1. Messages are broken down into 4KB pages.
  2. When getting the contents of a page set page, if the page is not in the buffers, the buffer manager reads it from disk into a buffer.
  3. If a page is requested and the contents are not required (for example it is about to be overwritten as part of a put) it does not need to read it from disk.
  4. If the page is updated – for example a new message is put, or a field in the message is updated during get processing – the page is not usually written to disk immediately.   The write to disk is deferred (using a process called the Deferred Write Process – another non-surprise in the naming convention).  This has the advantage that there can be many updates to a page before it is written to disk.
  5. Any buffer page which has not been changed, and is not currently being used, is a free page.

If the system crashes, non persistent messages are thrown away, and persistent messages can be rebuilt from the log.

To reduce the restart time, pages containing persistent data are written out to the page set periodically.   This is driven by the log data set filling up, which causes a checkpoint.  Updates which have been around for more than 2 checkpoints are written to disk.  During restart the queue manager then knows how far back in the log it needs to go.

In both cases, checkpoint and buffer pool filling up (when there are fewer than 15% free pages, i.e. more than 85% in use), once a page has been successfully written to the page set, the buffer is marked as free.

Pages for non-persistent messages can be written out to disk.

If the buffer pool is critically short of free buffers, with fewer than 5% free, then pages are written to the page set immediately rather than through the deferred write process.  This allows some application work to be done while the buffer manager is trying to make more free pages available.

What is typical behaviour?

The buffer pool is working well.

When a message is got, the buffer is already in the buffer pool so there is no I/O to read from the page set.

The buffer pool is less than 85% full (has more than 15% free pages), there is periodic I/O to the page set because pages with persistent data are written to the page set at checkpoint.

The buffer pool fills up and has more than 85% in-use pages.

This can occur if the number and size of the messages being processed is bigger than the size of the buffer pool. This could be a lot of messages in a unit of work,  big messages using lots of pages, or lots of transactions putting and getting messages.  It can also occur when there are lots of puts, but no gets.

If the buffer pool has between 85% and 95% in-use pages (between 15% and 5% free pages), the buffer manager is working hard to keep free pages available.

There will be I/O intermittently at checkpoints, and a steady I/O as the buffer manager writes the pages to the page set.

If messages are being read from disk, there will be read activity from the page set, but the buffer pool page can be reused as soon as the data has been copied out of the buffer.

The buffer pool has less than 5% free pages.

The buffer manager is in overdrive.  It is working very hard to keep free pages in the buffer pool.   There will be pages written out to the page set as it tries to increase the number of free pages.  Gets may require reads from the page set.  All of this I/O can cause contention, the page set I/Os slow down, and so MQ API requests using this buffer pool slow down.

What should be monitored

Most cars these days have a “low fuel” light and a “take me to a garage” light.  For monitoring we can provide something similar.

Monitor the % buffer pool full.

  • If it is below 85% things are OK
  • If it is between 85% and 95% it needs to be watched; it may be “business as usual”.
  • If it is >= 95% it should raise an alert.  It can be caused by applications or channels not processing messages.
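The traffic-light scheme above can be sketched in a few lines of Python (the function name and example figures are mine, not from any MQ API):

```python
def buffer_pool_status(pct_in_use):
    """Classify buffer pool usage with the 85%/95% thresholds above."""
    if pct_in_use < 85:
        return "green"   # OK: gets are normally satisfied from the pool
    if pct_in_use < 95:
        return "amber"   # buffer manager working hard; keep watching
    return "red"         # critical: immediate writes, I/O contention - alert

# Example: a pool with 4,400 of 5,000 pages in use is 88% full
print(buffer_pool_status(100 * 4400 / 5000))   # -> amber
```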

Monitoring the number of pages written does not give much information.

It could be caused by a checkpoint activity, or because the buffer pool is filling up.

Monitoring the number of pages read from the page set can provide some insight.

If you compare today with this time last week you can check the profile is similar.

If the buffer pool is below 85% used,

  • Messages being got are typically in the buffer pool so there is no read I/O.
  • If there is read I/O this could be for messages which were not in the buffer pool – for example reading the queue after a queue manager restart.

If the buffer pool is more than 85% in-use and less than 95% in-use, this can be caused by a large message workload coming in, and MQ being “elastic” and queuing the messages.  Even short-lived messages may be read from disk.  The number of read I/Os gives an indication of throughput.  Compare this with the previous week to see if the profile is similar.

If the buffer pool is more than 95% in-use this will have an impact on performance, as every page processed is likely to have I/O to the page set, and elongated I/O response time due to the contention.

What to do

You may want “operators notes” either on paper or online which describe the expected status of the buffer pools on individual queue managers.

  1. PROD QMPR
    1. BP 1 green less than 85% busy
    2. BP 2 green less than 85% busy
    3. BP 3 green except for Friday night, when it goes amber with a read I/O rate of 6000 I/Os per minute.
  2. TEST QMTE
    1. BP 1 green less than 85% busy
    2. BP 2 green less than 85% busy
    3. BP 3 usually amber – used for bulk data transfer

What do the buffer statistics mean?

There are statistics on the buffer pool usage.

  1. Buffer pool number.
  2. Size of the buffer pool at the time the data was produced.
  3. Low – the lowest number of free pages in the interval.  100 * (Size – Low)/Size gives you the % full.
  4. Now – the current number of free pages in the interval.
  5. Getp – the number of requests ‘get page with contents’.  If the page is not in the buffer pool then it is read from the page set.
  6. Getn.   A new page is needed.   The contents are not relevant as it is about to be overwritten.  If the page is not in the buffer pool, just allocate a buffer, and do not read from the page set.
  7. STW – set write intent.  This means the page was got for update.  I have not seen a use for this value.  For example:
    1. A put wants to put part of a message on the page
    2. A get is being done and it wants to set the “message has been got” flag.
    3. The message has been got, and so pointers to a page need to be updated.
  8. RIO – the number of read requests to the page set.  If this is greater than zero:
    1. The request is for messages which have not been used since the queue manager started
    2. The buffer pool had reached 85%, pages had been moved out to the page set, and the buffer has been reused.
  9. WIO – the number of write I/Os that were done.  These writes could be due to a checkpoint, or because the buffer pool filled up.
  10. TPW – total pages written, a measure of how busy the buffer pool was.   These writes could be due to a checkpoint, or because the buffer pool filled up.
  11. IMW – immediate write.  I have not used this value; sometimes I observe it is high, but it is not useful.  It can be caused by
    1. the buffer pool being over 95% full, so all application write I/O is synchronous,
    2. or a page was being updated during the last checkpoint, and it needs to be written to the page set when the update has finished.  This should be a small number.  Frequent checkpoints (eg every minute) can increase this value.
  12. DWT – the number of times the Deferred Write processor was started.  This number has little value.
    1. The DWP could have started and been so busy that it never ended, so this counter is 1.
    2. The DWP could have started, written a few pages and stopped – and done this repeatedly, in this case the value could be high.
  13. DMC – the number of times the buffer pool crossed the 95% in-use limit.  If this is > 0 it tells you the buffer pool crossed the limit:
    1. This could have crossed just once, and stayed over 95%
    2. This could have gone above 95%, then below 95% etc.
  14. SOS – the buffer pool was completely short on storage; there were no free pages.  This is a critical indicator.  You may need to do capacity planning and make the buffer pool bigger, or see if there was a problem where messages were not being got.
  15. STL – the number of times a “free buffer” was reused.  A buffer was associated with a page of a page set; that buffer has been reused and is now for a different page.  If STL is zero, it means all pages that were used were in the buffer pool.
  16. STLA – A measure of contention when pages are being reused.  This is typically zero.
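Putting the first two fields to work: the % full figure described above comes straight from Size and Low.  A minimal sketch (function name is mine):

```python
def pct_full(size, low):
    """Peak % full for the interval, from the buffer pool statistics:
    size is the number of buffers, low is the lowest number of free
    pages seen in the interval (the Low field)."""
    return 100 * (size - low) / size

# A 5,000-buffer pool whose free pages dipped to 250 peaked at 95% full
print(pct_full(5000, 250))   # -> 95.0
```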

Summary

Now you know as much as I do about buffer pools, you’ll see that the % full (or % free) is the key measure.  If the buffer pool is more than 85% in-use pages, then the I/O rate is a measure of how hard the buffer manager is running.

What should I monitor for MQ on z/OS – logging statistics

For the monitoring of MQ on z/OS, there are a couple of key metrics you need to keep an eye on for the logging component, as you sit watching the monitoring screen.

I’ll explain how MQ logging works, and then give some examples of what I think would be key metrics.

Quick overview of MQ logging

  1. MQ logging has a big (sequential) buffer for logging data, which wraps.
  2. Application does an MQPUT of a persistent message.
  3. The queue manager updates lots of values (e.g. queue depth, last put time) as well as moving data into the queue manager address space.  This data is written to the log buffers.  A 4KB page can hold data from many applications.
  4. An application does an MQCOMMIT.  MQ takes the log buffers up to and including the current buffer and writes them to the current active log data set.  Meanwhile other applications can write to other log buffers.
  5. The I/O finishes and the log buffers just written can be reused.
  6. MQ can write up to 128 pages in an I/O. If there are more than 128 buffers to write there will be more than 1 I/O.
  7. If application 1 commits, the I/O starts, and then application 2 commits.  The I/O for the commit in application 2 has to wait for the first set of disk writes to finish before the next write can occur.
  8. Eventually the active log data set fills up.  MQ copies this active log to an archive data set.  This archive can be on disk or tape.   This archive data set may never be used again in normal operation.  It may be needed for recovery of transactions or after a failure.   The Active log which has just been copied can now be reused.

What is interesting?

Displaying how much data is logged per second.

Today       XXXXXXXXXXXXXXXXXXXX
Last week XXXXXXXXXXXXXXXXXXXXXXXXXXXXX
Yesterday XXXXXXXXXX          
      0                     100MB/Sec    200 MB/Sec

This shows that the logging rate today is lower than last week.   This could be caused by:

  1. Today just being quieter than last week.
  2. A problem, so there are fewer requests coming into MQ.   This could be caused by problems in another component, or a problem in MQ.    When using persistent messages the longest part of a transaction is the commit, waiting for the log disk I/O.  If this I/O is slower it can affect the overall system throughput.

You can get the MQ log I/O response times from the MQ log statistics data.

Displaying MQ log I/O response time

You can break down where the time doing an I/O is spent into the following areas:

  1. Scheduling the I/O – getting the request into the I/O processor on the CPU.
  2. Sending the request down to the disk controller (e.g. 3990).
  3. Transferring the data.
  4. The I/O completes and sends an interrupt to z/OS; z/OS has to catch this interrupt and wake up the requester.

Plotting the I/O time does not give an entirely accurate picture, as the time to transfer the data depends on the amount of data to transfer.  On a well-run system there should be enough capacity so the other times are constant.    (I was involved in a critical customer situation where the MQ logging performance “died” every Sunday morning.   They did backups, which overloaded the I/O system.)

From the MQ log statistics you can calculate the average I/O time.  There are two sets of data for each log:

  1. The number of requests, and the sum of the times of the requests, to write 1 page.  The average should be pretty constant, as this data is for when only one 4KB page was transferred.
  2. The number of requests, and the sum of the times of the requests, to write more than 1 page.  The average I/O time will depend on the amount of data transferred.
  • When the system is lightly loaded, there will be many requests to write just one page. 
  • When big messages are being processed (over 4 KB) you will see multiple pages per I/O.
  • If an application processes many messages before it commits you will get many pages per I/O.   This is typical of a channel with a high achieved batch size.
  • When the system is busy you may find that most of the I/O write more than one page, because many requests to write a small amount of data fills up more than one page.
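Turning the two sets of counters into averages is simple division; a sketch (field names are mine, and the example figures are invented):

```python
def average_io_ms(io_count, total_io_time_ms):
    """Average I/O time for an interval; None when there were no I/Os."""
    return total_io_time_ms / io_count if io_count else None

# Single-page writes: the average should be roughly constant
print(average_io_ms(2000, 2400.0))   # -> 1.2 (ms)
# Multi-page writes: the average varies with how much data was moved
print(average_io_ms(500, 1500.0))    # -> 3.0 (ms)
```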

I think displaying the average I/O times would be useful.   I haven’t tried this in a customer environment (as I don’t have a customer environment to use).    So if the data looks like

Today         XXXXXXXXXXXXXXXXXXXXXXXX
Last week     XXXXXXXXXXXXXXXXXXXXXXXXXXXXX  
One hour ago XXXXXXXXXXXXXXXXXXX
time in ms 0 1 2 3

it gives you a picture of the I/O response time.

  • The dark green is for I/O with just one page, the size of the bar should be constant.
  • The light green is for I/O with more than one page, the size of the bar will change slightly with load.  If it changes significantly then this indicates a problem somewhere.

Of course you could just display the total I/O response time = (total duration of I/Os) / (total number of I/Os), but you lose the extra information about the writing of 1 page.

Reading from logs

If an application using persistent messages decides to roll back:

  • MQ reads the log buffers for the transaction’s data and undoes any changes.
  • It may be the data is old and not in the log buffers, so the data is read from the active log data sets.
  • It may be that the request is really old (for example half an hour or more), MQ reads from the archive logs (perhaps on tape).

Looking at the application doing a roll back, and having to read from the log.

  • Reading from buffers is OK.   A large number indicates an application problem or a DB2 deadlock-type problem.  You should investigate why there is so much rollback activity.
  • Reading from active logs should be infrequent.  It usually indicates an application coding issue where the transaction took too long before committing, perhaps due to a database deadlock, or bad application design (where there is a major delay before the commit).
  • Reading from archive logs is really bad news.  This should never happen.

Displaying reads from the logs

Today         XXXXXXXXXXXXXXXXXXXXXXXX
Last week     X
One hour ago  XXXXX
rate          0        10    20     40

Where green is “read from buffer”, orange is “read from active log”, and red is “read from archive log”.  Today looks like a bad day.

Should I monitor MQ – if so what for ?

I’ve been talking to someone about using the MQ SMF data, and when would it be useful. There is a lot of data. What are the important things to watch for, and what do I do when there is a problem?

Why monitor?

From a performance perspective there are a couple of reasons why you should monitor

Today’s problems

Typical problems include

  1. “Transaction slow down”, people using the mobile app are timing out.
  2. Our new marketing campaign is a great success – we have double the amount of traffic, and the backend system cannot keep up.
  3. The $$ cost of the transactions has gone up.   What’s the problem?

With problems like transaction slow down, the hard part is often to determine which is the slow component.  This is hard when there may be 20 components in the transaction flow, from front end routers, through CICS, MQ, DB2, IMS, and WAS, and the occasional off-platform request.

You can often spot problems because work is building up, (there are transactions or messages queued up), or response times are longer.  This can be hard to interpret because “the time to put a message and get the reply from MQ is 10 second” may at first glance be an MQ problem – but MQ is just the messenger, and the problem is beyond MQ.  I heard someone say that the default position was to blame MQ, unless the MQ team could prove it wasn’t them.

Yesterday’s problem

Yesterday/last week you had a problem and the task force is looking into it.  They now want to know how MQ/DB2/CICS/IMS etc. were behaving when there was a problem.  For this you need historical data.  You may have summary data recorded on a minute-by-minute basis, or summary data recorded over an hour.   If the data is averaged over an hour you may not see any problems: a spike in workload may be followed by no work, and so on average everything is OK.
It is useful to have “maximum value in this time range”. So if your maximum disk I/O time was 10 seconds in this interval at 16:01:23:45 and the problem occurred around this time, it is a good clue to the problem.

Tomorrow’s problem.

You should be able to get trending information.  If your disk can sustain an I/O rate of 100MB a second, and you can see that every week at peak time you are logging an extra 5MB/second, this tells you that you need to do something about it: either get faster disks, or split the work to a different queue manager.

Monitoring is not capacity planning.

Monitoring tells you how the system is performing in its current configuration.  Monitoring may show a problem, but it is up to the capacity and tuning team to fix it.  For example, “how big a buffer pool do we need?” is a question for the capacity team.  You could make the buffer pools use GB of buffers – or keep the buffer pools smaller and let MQ “do the paging” to and from disk.

How do you know when a ‘problem’ occurs?

I remember going to visit a customer because they had a critical problem with the performance on MQ.  There were so many things wrong it was hard to know where to start.  The customer said that the things I pointed out were always bad – so they were not the current problem.  Eventually we found the (application) problem.  The customer was very grateful for my visit – but still did not fix the performance problems.

One thing to learn from this story is that you need to compare a bad day with a good day, and see what is different.  This may mean comparing it with the data from the same time last week, rather than from an hour ago.  I would expect that last week’s profile should be a good comparison to this week.   One hour ago there may not have been any significant load.

With MQ, there is a saying: “A good buffer pool is an empty buffer pool”.  Does a buffer pool which has filled up and is causing lots of disk I/O mean there is a problem?  Not always.  It could be MQ acting as a queuing system: if you wait half an hour for the remote system to restart, all of the messages will flow, and the buffer pool becomes empty.  If you know this can happen, it is good to be told it is happening, but the action may be “watch it”.  If this is the first time it has happened, you may want to do a proper investigation, and find out which queues are involved, which channels are not working, and which remote systems are down.

What information do I need?

It depends on what you want.  If you are sitting in the operations room watching the MQ monitor while sipping a cup of your favourite brew, then you want something like modern cars.  If there is a problem, a red light on the dashboard lights up meaning “You need to take the car to the garage”.   The garage can then put the diagnostic tools onto the engine and find the reason.

You want to know if there is a problem or not.  You do not need to know you have a problem to 3 decimal places – yes, maybe, or no is fine.

If you are investigating a problem from last week, you, taking the role of the garage mechanic, need the detailed diagnostics.

When do you need the data?

If you are getting the data from SMF records you may get a record every 1 minute, or every half an hour.  This may not be frequent enough while there is a problem.  For example if you have a problem with logging, you want to see the data on a second by second basis, not once every 30 minutes.

Take the following scenario.  It is 10:59, 29 minutes into the interval at the end of which you get the SMF (or online monitor) data.

So far in this interval, there have been 100,000 I/Os.   The total time spent doing I/Os is 100 seconds.  By calculation, the average time for an I/O is 1 millisecond.  This is a good response time.

You suddenly hit a problem, and the I/O response time goes up to 100 ms; 10 more I/Os are done.

The total number of I/Os is now 100,010, and the time spent doing I/Os is now 101 seconds.  By calculation the average I/O time is now 1.009899 milliseconds.  This does not show there is a problem, as it is within typical variation.

If you can get the data from a few seconds ago and now you can calculate the differences

  1. Number of I/Os: 100,010 – 100,000 = 10
  2. Time spent doing I/O: 101 – 100 = 1 second
  3. Average I/O time: 100 ms.  Wow, this really shows the problem, compared to calculating the value from the 30-minute set of statistics, which showed the time increasing from 1 ms to 1.01 ms.

This shows you need granular data, perhaps every minute – but this means you get a lot of data to manage.
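The arithmetic in the scenario above can be sketched as follows (the function name is mine; the counters are the cumulative I/O count and total I/O time from two snapshots):

```python
def interval_average_ms(prev_count, prev_total_s, now_count, now_total_s):
    """Average I/O time (in ms) over just the interval between two
    snapshots of cumulative counters; None if no I/Os were done."""
    d_count = now_count - prev_count
    d_total_s = now_total_s - prev_total_s
    return 1000 * d_total_s / d_count if d_count else None

# Average over the whole 29 minutes: hides the problem
print(round(1000 * 101 / 100010, 4))                   # ~1.0099 ms
# Average over just the last few seconds: shows it clearly
print(interval_average_ms(100000, 100, 100010, 101))   # -> 100.0 ms
```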
 

You have to be brave to climb back up the slippery slope.

It is interesting that you notice things when you are sensitive to them.  Someone bought a new car in “Night Blue” because she wanted a car that no one else had – she had not seen any cars of that colour.  Once she got it, she noticed many other cars of the same make and colour.

I was sliding down a slippery slope, and realised that another project I was working on had also gone down a slippery slope.

My slippery slope.

I wanted a small program to do a task.  It worked, and I realised I needed to extend it, so with lots of cutting, pasting, and editing the file soon got to 10 times its original size.  I then realised the problem was a bit more subtle and started making even more changes.  I left it, and went to have dinner with a glass of wine.

After the glass of wine I realised that, now I understood the problem, there were easier (and finite) solutions.  Should I continue down the slippery slope or start again?

I tried a compromise: I wrote some Python code to process a file of data and generate the C code, which I then used.  This worked, so I kept this solution; yes, it was worth stopping, climbing back up the slippery slope, and finding a different solution.

I had a cup of tea to celebrate, and realised that I could see the progress down a slippery path for another project I was working on.

The slippery slope of a product customisation.

I was trying to configure a product, and thought the configuration process was very complex.  I could see the slippery slope the development team had taken to end up with a very complex solution to a simple problem.

I looked into the configuration expecting to see a complex product which needed a complex configuration tool, but no, it looked just like many other products.

For many products (including MQ on z/OS), configuration consists of:

  1. Define some VSAM files and initialize them
  2. Define some non VSAM files
  3. Create some system definitions
  4. Specify parameters for example MQ’s TCP/IP port number.

The developer of the product I was trying to install had realised that there were many parameters, for example the high level qualifier of data sets, as well as product-specific parameters such as the TCP/IP port number, and so developed some dialogs to make it easy to enter and validate the parameters.  This was good – it means the values can be checked before they are used.

The next step down the slippery slope was to generate the JCL for the end user.  This was OK, but instead of having a job for each component, they had one job with “If configuring for component 1 then create the following datasets” etc.  In order to debug problems with this, they then had to capture the output and save it in a dataset.   They then needed a program to check this output and validate it.  By now they were well down the slippery slope.

The same configuration parameter was needed in multiple components, and rather than use one file, shared by all the JCL, they copied the parameter into each component.

During configuration it looks as if files were copied from the SMP/E target libraries to intermediate libraries, and then to instance-specific libraries.  I compared the contents of the SMP/E target libraries with the final libraries and they were 95% common.  It meant each instance had its own self-contained set of libraries.

I do not want to rerun the configuration in case it overwrites the manual tweaking I had to do.

I would much rather have more control over the configuration, for example put JCL overrides such as where to allocate the volumes, in the JCL, so it is easy to change.

A manager once said to me that the first thing you should do every day, once you have had your first coffee, is to remind yourself of the overall goal, and ask if the work you are doing is for this goal, and not a distraction.  There is a popular phrase: when you’re up to your neck in alligators, it’s hard to remember that your initial objective was to drain the swamp.

 

If the facts do not match your view – do not ignore the facts.

It is easy to ignore facts if you do not like them; sometimes this can have major consequences.    I recently heard two similar experiences of this.

We were driving along in dense fog, trying to visit someone who lived out in the country.  We had been there in daylight and thought we knew the way.  The conversation went a bit like this:

  • It is along here on the right somewhere – there should be a big gate
  • There’s a gate.  Oh they must have painted it white since last time we were here
  • The track is a bit rough, I thought it was better than this.
  • Ah here’s another gate.   They must have installed it since we were here last.
  • Round the corner and here we are – oh, where’s the house gone?

Of course we had taken the wrong side road.  We had noticed that the facts didn’t match our picture, and so we changed the facts.  Instead of thinking “that gate is the wrong colour” we thought “they must have painted the gate”.  Instead of “we were not expecting that gate” we thought “they must have installed a new gate”.  It was “interesting” backing the car up the track to the main road in the dark.

I was trying to install a product and having problems.  I had already experienced a few problems where the messages were a bit vague.  I got another message which implied I had mis-specified something.  I checked the 6 characters in a file and thought “The data is correct, the message must be wrong, I’ll ignore it”.   I gave up for the day.  Next day I looked at the problem again, and found I had been editing the wrong file.  The message was correct and I had wasted 3 hours.