The power of exclude in ISPF edit

I hope most people know about the ISPF edit exclude facility.  You can issue commands like

X ALL /* exclude everything */
F XXXX ALL
F YYYYY ALL WORD
DELETE ALL X /* delete everything which is excluded */
RESET
X ALL
F ZZZZ all
F AAAA all
FLIP /* make the excluded visible, and the displayed hidden */
CHANGE 'ABC' 'ZYX' ALL NX /* do not change hidden lines */

You can also exploit this in macros.  See ISREDIT XSTATUS.

/* rexx */ 
ADDRESS ISPEXEC
'ISREDIT MACRO'
"ISREDIT EXCLUDE 'AAA' ALL PREFIX" /* just like the command*/
"ISREDIT LOCATE .ZLAST" /* go to the last line */
"ISREDIT (r,c) = CURSOR" /* get the row and column number */
/* r has the row number of the last line */
do i = 1 to r
  "ISREDIT (xnx) = XSTATUS "i
  say "Line exclude status is" i xnx
  "ISREDIT (line) = LINE " i /* extract the line */
  if (...) then
    "ISREDIT XSTATUS "i" = X" /* exclude it */
end

You can also see if a line has been changed this edit session:

"ISREDIT (status) = LINE_STATUS "i /* for line i */
if (substr(status,8,1) = '1') then
  say "Line" i "was changed this session"

It is now so easy to have hide/show text in your HTML page. Auto-twisties are here.

Click me to show the rest of the information. Go on, click me

This text is hidden until the summary line is clicked.

This is done using

<details><summary> This is displayed</summary>this is hidden until the summary is clicked</details>

So easy and impressive, isn’t it? You used to have to set up scripts to do this. Now it is built in. Of course, you’ll need to format the summary so it looks clickable.

Note: Another technique, the accordion, requires JavaScript to work. <details> works without JavaScript.

You can also have hover text using <span title="hovertext">displayed text</span>.

How to backup only key data sets on z/OS

I’ve been backing up my data sets on z/OS, and wondered what the best way of doing it was.

I wanted to back up data sets containing data I wanted to keep, but did not want to back up other data sets which could easily be recreated, such as IPCS dump data sets, the output of compiles, or the SMF records.

DFDSS has a backup and restore program which is very powerful.  With it you can

  • Process data sets under a High Level Qualifier – include or exclude data sets.
  • Back up only changed data sets.
  • Back up individual files in a ZFS or USS file system – but this is limited: you have to explicitly specify the files you want to back up. You cannot back up a directory.

You cannot back up individual members of a PDS(E). You have to back up the whole PDS(E). If you need to restore a member, restore the backup with a different HLQ and select the members from that.

What should I use?

I tend to use XMIT and DFDSS – the Storage Management component on z/OS. DFDSS tends to be used by the data managers as it can back up groups of data sets, volumes, etc.

Backing up using XMIT.

This has the advantage that the output file is a card image, which is a portable format.

I have a job

//MYLIBS1 JCLLIB ORDER=USER.Z24A.PROCLIB 
// SET TODAY='D201224'
//S1 EXEC PROC=BACKUP,P=USER.Z24A.PARMLIB,DD=&TODAY.
//S2 EXEC PROC=BACKUP,P=USER.Z24A.PROCLIB,DD=&TODAY.

Where

  • P is the name of the data set to back up.
  • TODAY is where I set today’s date.

The backup procedure has

//BACKUP PROC P='USER.Z24A.PROCLIB',DD='UNKNOWN' 
//S1 EXEC PGM=IKJEFT01,REGION=0M,
// PARM='XMIT A.A DSN(''&P'') OUTDSN(''BACKUP.&P..&DD'')'
//SYSPRINT DD SYSOUT=*
//SYSTSPRT DD SYSOUT=*
//SYSTSIN DD *
// PEND

The command that gets generated when P is USER.Z24A.PROCLIB and DD=D201224 is

XMIT A.A DSN('USER.Z24A.PROCLIB') OUTDSN('BACKUP.USER.Z24A.PROCLIB.D201224') 

This makes it easy to find the backups for a file, and for a particular date.

To restore a file you use the command TSO RECEIVE INDSN('BACKUP.USER.Z24A.PROCLIB.D201224').

Using DFDSS to backup

This is a powerful program, and it is worth taking baby steps to understand it.

The basic job is

//IBMDFDSS JOB 1,MSGCLASS=H 
//S1 EXEC PGM=ADRDSSU,REGION=0M,PARM='TYPRUN=NORUN'
//TARGET DD DSN=COLIN.BACKUP.DFDSS,DISP=(MOD,CATLG),
// SPACE=(CYL,(50,50),RLSE)
//SYSPRINT DD SYSOUT=*
//SYSIN DD *
DUMP -
DATASET(INCLUDE(COLIN.JCL,COLIN.WLM,COLIN.C) -
BY(DSCHA,EQ,YES)) -
OUTDDNAME(TARGET) -
COMPRESS
/*
  • For the syntax of the DUMP DATASET command see here.
  • This dumps the specified data sets, COLIN.JCL, COLIN.WLM and COLIN.C, and puts them in one file through the TARGET DD. TARGET is defined as a data set (COLIN.BACKUP.DFDSS).
  • This does not actually do the backup, because it has TYPRUN=NORUN.
  • You can specify many filter criteria in the BY(…), such as last reference, size, etc. See here.
  • The BY(DSCHA,EQ,YES) says dump data sets only if they have the “changed flag” set. The changed flag is set when a data set is opened for output. Using ADRDSSU with the RESET option resets the changed flag. This allows you to back up only data sets which have changed – see below.
  • It compresses the files as it backs them up.

I did have

DATASET(INCLUDE(COLIN.**) - 
EXCLUDE(COLIN.O.**,COLIN.SMP*.**,COLIN.DDIR ) -
BY(DSCHA,EQ,YES)) -

Which says back up all data sets matching COLIN.**, but exclude the listed files. I ran this with TYPRUN=NORUN, and it listed 100+ data sets. Whoops – so I changed it to explicitly include the files I wanted to back up. Once I had determined the files, I removed the TYPRUN=NORUN and backed up the data sets.

Using DFDSS to restore

You can restore from the DFDSS backups using a job like

//S1 EXEC PGM=ADRDSSU,REGION=0M,PARM='TYPRUN=NORUN' 
//TARGET DD DSN=COLIN.BACKUP.DFDSS,DISP=SHR
//SYSPRINT DD SYSOUT=*
//SYSIN DD *
RESTORE -
DATASET(INCLUDE(COLIN.C) ) -
RENAME(COLINN) -
INDDNAME(TARGET)
/*

This says restore the files specified in the INCLUDE, rename the HLQ to COLINN, and read the backup from the data set referenced by //TARGET.

Initially I specified PARM='TYPRUN=NORUN' so it did not actually try to restore the files. It reported

THE INPUT DUMP DATA SET BEING PROCESSED IS IN LOGICAL DATA SET FORMAT AND WAS CREATED BY Z/OS DFSMSDSS 
VERSION 2 RELEASE 4 MODIFICATION LEVEL 0 ON 2020.359 17:16:44
DATA SET COLINN.C WAS SELECTED
PROCESSING BYPASSED DUE TO NORUN OPTION
THE FOLLOWING DATA SETS WERE SUCCESSFULLY PROCESSED
COLIN.C

From the time stamp 2020.359 17:16:44 we can see I was using the expected backup.

Once you are happy you have the right backup, and the right list of data sets, you can remove the PARM='TYPRUN=NORUN' to restore the data.

If you have backed up COLIN.JCL and SUE.JCL, and try to rename on restore (so you do not overwrite existing files), it would fail, because it would create COLINN.JCL and then try to create COLINN.JCL from the other file! To get round this, use INCLUDE(COLIN.**) RENAME(COLINN) and INCLUDE(SUE.**) RENAME(SUEN).

What’s in the backup?

You can use the following to list the contents  (with TYPRUN=NORUN)

RESTORE - 
DATASET(INCLUDE(**) ) -
INDDNAME(TARGET)

Note: because this job does not have REPLACE, it will not overwrite any files.

Using advanced backup facilities.

Each data set has a changed flag associated with it. If this bit is on, the data set has been changed. You can display this in the data set section of ISMF (field 24 – CHG IND), or, if you have access to the DCOLLECT output, it is in one of the flags.

If you use

DUMP - 
DATASET(INCLUDE(COLIN.JCL,COLIN.WLM,COLIN.C) -
BY(DSCHA,EQ,YES)) -
RESET -
OUTDDNAME(TARGET) -
COMPRESS

it will back up the data sets and reset the changed flag. In my case it backed up the 3 data sets I had specified.

When I reran the same job, it backed up NO data sets, giving me the message

ADR383W (001)-DTDSC(01), DATA SET COLIN.JCL NOT SELECTED, 01.
where 01 means “The fully qualified data set name did not pass the INCLUDE, EXCLUDE, and/or BY filtering criteria”.

This is because I had specified BY(DSCHA,EQ,YES), which says select only data sets with the change flag (DSCHA) on. The first DUMP request RESET the flag, so the second DUMP job skipped the data sets.

You can exploit this by backing up all data sets once a week, and just the changed data sets during the week.

You might want to keep the output of the dump job in a member of a PDS, so you can search for your data set name to find the date of a backup which included the file.

How many backups should I keep?

This depends on whether you are backing up all files, or just changed files. You can use a GDG (Generation Data Group – see here), where each backup is a new generation of the data set. If you specify 3 generations, then when you create the 4th copy, the oldest generation is deleted.

How can I replicate the RACF definitions for MQ on z/OS?

If you are the very careful person who makes all updates to RACF only through batch jobs, then this is easy – take the old jobs, and change the queue manager name and rerun them.

For the other 99.99% of us,  read on…

Even if you have been careful to keep track of any changes to security definitions,  someone else may have made a change either using the native TSO commands, or via the ISPF panels. 
You can list the RACF database, but there is no easy way of listing it in command format, to allow you to do a global rename and submit the commands.

I have found two ways of extracting the RACF definitions.

  1. Using an unloaded copy of the RACF database
  2. Using RACF commands to extract and recreate the requests

Using an unloaded copy of the RACF database

I discovered dbsync on a RACF tools repository which does most of the hard work.   You can run a RACF utility to unload the RACF database into a flat file (omitting sensitive information like passwords etc).  Dbsync is a rexx program which takes two copies of an unloaded database, and generates the RACF commands for the differences. I simply used my existing unloaded file and a null file, and got out the commands to create all of the entries.

The steps are

  1. Unload the RACF database
  2. Get dbsync into your z/OS system
  3. Run DBsync
  4. Edit the files, and remove all lines which are not relevant
  5. Run the output to create/modify the definitions

Unload the database

//IBMUSUN JOB 1,MSGCLASS=H 
//* use the TSO RVARY command to display databases
//UNLOAD EXEC PGM=IRRDBU00,PARM=NOLOCKINPUT
//SYSPRINT DD SYSOUT=*
//INDD1 DD DISP=SHR,DSN=SYS1.RACFDS
//OUTDD DD DISP=(MOD,CATLG),DSN=COLIN.RACF.UNLOAD,
// SPACE=(CYL,(1,1)),DCB=(LRECL=4096,RECFM=VB,BLKSIZE=13030)

Of course this assumes you have the authority to create this file. If not, ask a friendly sysprog to run the job, then edit the output to delete all records which do not have MQ in them.

Run dbsync

I had to make the following changes

  1. Dataset 1 was the dataset I created above
  2. Dataset 2 was a dummy

Modify the sort step to output to a temporary output file

//COLINRA JOB 1,MSGCLASS=H 
//* ftp://public.dhe.ibm.com/eserver/zseries/zos/racf/dbsync/
//SORT1 EXEC PGM=SORT
//SYSOUT DD SYSOUT=*
//SORTIN DD DISP=SHR,DSN=COLIN.RACF.UNLOAD
//SORTOUT DD DISP=(NEW,PASS),DSN=&TEMP1,SPACE=(CYL,(1,1))
//SYSIN DD *
SORT FIELDS=(5,2,CH,A,7,1,AQ,A,8,549,CH,A)
ALTSEQ CODE=(F080,F181,F282,F383,F484,F585,F686,F787,F888,F989,
C191,C292,C393,C494,C595,C696,C797,C898,C999,
D1A1,D2A2,D3A3,D4A4,D5A5,D6A6,D7A7,D8A8,D9A9,
E2B2,E3B3,E4B4,E5B5,E6B6,E7B7,E8B8,E9B9)
OPTION VLSHRT,DYNALLOC=(SYSDA,3)
/*

Delete the sort step for the other data set, as I was using a dummy file.

Run dbsync

I changed a few lines. The template JCL had

//OUTSCD1 DD DSN=your.dsname.for.outscd1,
// DISP=(NEW,CATLG),

so I changed

  • your.dsname.for to COLIN.RACF
  • NEW,CATLG to MOD,CATLG
  • Upper cased the changed lines using the ucc…ucc ISPF edit line command.

//DBSYNC EXEC PGM=IKJEFT01,REGION=5000K,DYNAMNBR=50,PARM='%DBSYNC' 
//SYSPRINT DD SYSOUT=*
//SYSTSPRT DD SYSOUT=*
//SYSTSIN DD DUMMY
//SYSEXEC DD DISP=SHR,DSN=COLIN.DBSYNC.REXX
//OPTIONS DD *
/* your options here
//INDD1 DD DISP=SHR,DSN=*.SORT1.SORTOUT
//INDD2 DD DUMMY
//OUTADD1 DD DSN=COLIN.RACF.ADDFILE1,
// DISP=(MOD,CATLG),
// UNIT=SYSDA,SPACE=(CYL,(25,25),RLSE),
// DCB=(RECFM=VB,LRECL=255,BLKSIZE=6400)
etc

The output was rexx commands in a file, such as

"rdefine MQCMDS CSQ9.** owner(IBMUSER ) uacc(CONTROL )
    audit(failures(READ )) level(00)"
"permit CSQ9.** class(MQCMDS) reset"
"rdefine MQQUEUE CSQ9.** owner(IBMUSER ) uacc(NONE )
     audit(failures(READ )) level(00) warning notify(IBMUSER )"
"permit CSQ9.** class(MQQUEUE) reset"
"rdefine MQCONN CSQ9.BATCH owner(IBMUSER ) uacc(CONTROL )
    audit(failures(READ )) level(00)"
"permit CSQ9.BATCH class(MQCONN) reset"
"rdefine MQCONN CSQ9.CHIN owner(IBMUSER ) uacc(READ )
    audit(failures(READ )) level(00)"
"permit CSQ9.BATCH class(MQCONN) id(IBMUSER ) access(ALTER )"
"permit CSQ9.BATCH class(MQCONN) id(START1 ) access(UPDATE )"
"permit CSQ9.CHIN class(MQCONN) id(IBMUSER ) access(ALTER )"

You edit and run the REXX exec to issue the commands.

Easy – it took me less than half an hour from start to finish.

Using RACF commands to extract and recreate the requests

I found that most people do not have access to an unloaded RACF database.  My normal userid does not have the authority to create the unloaded copy. 

I put an exec up on GitHub. It issues a display command for each class in MQCMDS MXCMDS MQQUEUE MXQUEUE MXTOPIC MQADMIN MXADMIN MQCONN, formats it as an RDEFINE command, and then issues the permit commands to give people access to it. It writes the output into the file being edited.

Use ISPF to edit a member where you want the output.

Make sure the REXX exec is in the SYSPROC or SYSEXEC concatenation; use ISRDDN to check.

Syntax

genclass <queuemanagername>

The output is like

 /* class:MXCMDS profile:MQPA class not found 
/* class:MXQUEUE profile:MQPA profile not found
/* class:MXTOPIC profile:MQPA profile not found
/* class:MXADMIN profile:MQPA profile not found
RDEFINE MQCONN -
MQPA.CICS -
- /* Create date 07/17/20
OWNER(ADCDA) -
- /* Last reference Date 07/17/20
- /* Last changed date 07/17/20
- /* Alter count 0
- /* Control count 0
- /* Update count 0
- /* Read count 0
UACC(NONE) -
LEVEL(0) -
- /* Global audit NONE
/* Permit MQPA.CICS CLASS(MQCONN ) RESET
Permit MQPA.CICS CLASS(MQCONN ) ID(ADCDA ) ACCESS(ALTER )
Permit MQPA.CICS CLASS(MQCONN ) ID(START1 ) ACCESS(READ )
/* class:MQCONN profile:MQPA.CICS profile not found

It includes a Permit… RESET, in case you want to remove all access.

How to capture TSO command output into an ISPF edit session

One way of doing this is to run a TSO command in batch, use SDSF to look at the output, and then use the SE line command to edit the spool file.

This was too boring, so I wrote a quick macro ETSO

/* REXX */
parse source p1 p2 p3 p4 p5 p6 p7 env p9
if env = "ISPF" then
do
  ADDRESS ISPEXEC
  "ISREDIT MACRO (cmd ,REST)"
end
else
do
  Say "This has to run from an ISPF edit session "
  return 8
end
/* prevent auto save ... user has to explicitly save or cancel */
"ISREDIT autoSave off prompt"
x = OUTTRAP('VAR.')
address tso cmd rest
y = OUTTRAP('OFF')
/* insert each record in turn, last to first, at the top of the file */
do i = var.0 to 1 by -1
  "ISREDIT LINE_AFTER 0 = '"var.i"'"
end
exit

This macro needs to go into a data set in the SYSEXEC or SYSPROC concatenation – see TSO ISRDDN. I typed etso LU in an edit session, and it inserted the output of the TSO LU command at the top of the file.

For capturing long lines of output, you’ll need to be in a file with a long record length.

Neat!

To create a temporary dataset to run the above command I created another rexx called TEMPDSN

/* REXX */
parse source p1 p2 p3 p4 p5 p6 p7 env p9
if env = "ISPF" then
do
  ADDRESS ISPEXEC
  "ISREDIT MACRO (cmd ,REST)"
end
else
do
  Say "This has to run from an ISPF session "
  return 8;
end
u = userid()
address TSO "ALLOC DSN('"u".temptemp') space(1,1) tracks NEW DELETE ",
"BLKSIZE(1330) LRECL(133) recfm(F B) DDNAME(MYDD)"
if (rc <> 0) then exit
address ISPEXEC "EDIT DATASET('"u".temptemp')"
address TSO "FREE DDNAME(MYDD)"

You can then do even cleverer things by having TEMPDSN invoke ETSO as a macro, passing parameters using the ISPF variable support, from the TSO command line.

Updated TEMPDSN. The changes are the tempargs VPUT/VERASE logic and the MACRO(ETSO) on the EDIT statement.

/* REXX */
parse source p1 p2 p3 p4 p5 p6 p7 env p9
say "env" env
if env = "ISPF" then
do
  ADDRESS ISPEXEC
  "ISREDIT MACRO (cmd ,REST)"
end
else
do
  Say "This has to run from an ISPF session "
  return 8
end
tempargs = ""
/* if we were passed any parameters */
parse arg args
if (length(args) > 0) then
do
  tempargs = args
  "VPUT tempargs profile"
end

u = userid()
address TSO "ALLOC DSN('"u".temptemp') space(1,1) tracks NEW DELETE ",
  "BLKSIZE(1330) LRECL(133) recfm(F B) DDNAME(MYDD)"
if (rc <> 0) then exit
address ISPEXEC "EDIT DATASET('"u".temptemp') MACRO(ETSO)"
address TSO "FREE DDNAME(MYDD)"

if (length(tempargs) > 0) then
do
  "VERASE tempargs profile"
end

and the macro ETSO has

/* REXX */
parse source p1 p2 p3 p4 p5 p6 p7 env p9
if env = "ISPF" then
do
  ADDRESS ISPEXEC
  "ISREDIT MACRO (cmd ,REST)"
end
else
do
  Say "This has to run from an ISPF edit session "
  return 8
end
/* prevent auto save ... user has to explicitly save or cancel */
"ISREDIT autoSave off prompt"
if (length(cmd) = 0) then
do
  "VGET tempargs " PROFILE
  if (rc = 0) then
    cmd = tempargs
end
x = OUTTRAP('VAR.')
address tso cmd rest
y = OUTTRAP('OFF')
/* insert each record at the top of the file */
do i = var.0 to 1 by -1
  "ISREDIT LINE_AFTER 0 = '"var.i"'"
end
EXIT

From ISPF option 6, a TSO command line, I issued tempdsn racdcert list certauth and it gave me the file with the output from the command; the file has a record length of 133. If you try to save the data, the data set will still be deleted on close, because it was allocated with the DELETE disposition.

Familiarity makes you blind to ugliness.

I’ve been using z/OS in some form for over 40 years and I do not see many major problems with it (some other people may – but it cannot be very many). It is the same with MQ on z/OS – but since I left IBM, and did not use MQ on z/OS for a year or so, I am now coming to it as a customer, and I do see a few things that could be improved. When these products were developed, people were grateful for any solution which worked, rather than an elegant solution which did not work. In general people do not like to be told their baby is ugly.

I have been programming in Python, and gone back to C to extract certificate information from RACF, and C looks really ugly in comparison.  It may be lack of vision from the people who managed the C language, or lack of vision from the people who set standards, but the result is ugly.

My little problem (well, one of them)

I have decoded a string returned from RACF and it is in an ugly structure which uses

typedef struct _attribute {
  oid identifier;
  buffer * pValue;
  int lName;
} x509_rdn_attribute;

typedef enum {
  unknown = 0,
  name = 1,
  surname = 2,
  givenName = 3,
  initials = 4,
  commonName = 6
  ....
} oid;

My problem is that I have an oid identifier of 6 – how do I map this to the string "commonName"?

I could use a big switch statement, but it means I have to preprocess the list, or type it all in.  I then need to keep an eye on the provided definitions, and update my file if it changes.  The product could provide a function to do the work for me.  A good start – but perhaps the wrong answer.

If I ruled the world…

As well as “Every day would be the first day of Spring”, I would have the C language provide a function for each enum, so mune_oid(2) returns “surname”.

I wrote some code to interpret distributed MQ trace, and I had to do this reverse mapping for many field types.  In the end, I wrote some Python script which took the CMQC.h header file and transformed each enum into a function which did the switch, and return the symbolic name from the number.

I came up with the idea of putting the definitions in a header like colin.h, built with clever macros:

TYPEDEF_ENUM(oid)
  ENUM(unknown, 0, descriptive comment)
  ENUM(name, 1, persons name)
  ...
END_TYPEDEF_ENUM(oid)

For normal usage I would define macros

#ifndef TYPEDEF_ENUM
#define TYPEDEF_ENUM(a) typedef enum {
#define ENUM(a,b,c) a = b, /* c */
#define END_TYPEDEF_ENUM(a) } a;
#endif
#include <colin.h>

For the lookup processing the macros could be

#define TYPEDEF_ENUM(a) char * lookup_##a(int in) { switch(in) {
#define ENUM(a,b,c) case b: return #a; /* c */
#define END_TYPEDEF_ENUM(a) } return "Unknown"; }
#include <colin.h>

but this solution means I have to include colin.h more than once, which may cause duplicate definitions.

My solution looks uglier than the problem, so I think I’ll just stick to my Python solution, and create a mune… function.