0/10 for JCL 101 homework

When I worked with customers, you could often tell from their JCL which people were not experienced at setting up subsystems and applications.

“My first JCL”

//STEPLIB   DD DSN=ABC120.EG1.SABCAUTH,DISP=SHR   /* USER LIB  */ 
// DD DSN=ABC120.SABCANLE,DISP=SHR
// DD DSN=ABC120.SABCAUTH,DISP=SHR
// DD DSN=CEE.SCEERUN,DISP=SHR
...
//FILE1 DD DSN=ABC120.EG1.FILE01,DISP=SHR
//FILE2 DD DSN=ABC120.EG1.FILE02,DISP=SHR

Where the FILE datasets contain user data.

All the ABC120.* datasets were shipped on a volume ABCV12. When the system was updated to a newer service level, the volume ABCV12 was refreshed and put on all systems.

What could go wrong?

The first problem – whoops

With the volume ABCV12 being replaced, all the user data was replaced too – whoops.

Solution: You need to keep your libraries and user data separate. Product libraries on ABCV12, and user data on USRxxx. You might want to make the volume for the product libraries (ABCV12) read only.

The second problem – what is this?

Once you have fixed the problem and separated the data onto different volumes, you upgrade to the next version, on volume ABCV13.

Now your JCL is

//STEPLIB   DD DSN=ABC130.EG1.SABCAUTH,DISP=SHR   /* USER LIB  */ 
// DD DSN=ABC130.SABCANLE,DISP=SHR
// DD DSN=ABC130.SABCAUTH,DISP=SHR
// DD DSN=CEE.SCEERUN,DISP=SHR
...
//FILE1 DD DSN=ABC120.EG1.FILE01,DISP=SHR
//FILE2 DD DSN=ABC120.EG1.FILE02,DISP=SHR

People looking at this will be confused and will ask what release we are running: the libraries say 1.3, but the data sets say 1.2.

Solution: Use a name like ABCUSER.EG1.FILE01 instead of ABC120.EG1.FILE01. These names never change when you migrate to a newer release.

You can enforce which HLQs go on which volumes using SMS ACS rules.
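
If the data sets are SMS-managed, a storage group ACS routine can do this. A minimal sketch (the storage group and FILTLIST names are made up):

PROC STORGRP
  FILTLIST PRODLIB INCLUDE(ABC1*.**)    /* ABC120, ABC130, ...   */
  SELECT
    WHEN (&DSN = &PRODLIB)
      SET &STORGRP = 'PRODSG'           /* product library volumes */
    OTHERWISE
      SET &STORGRP = 'USERSG'           /* user data volumes       */
  END
END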

You want to do a test upgrade to the next release – how much work is it!

Over this weekend, you want to test out the next release and go from release 1.3 to 1.4. You look at all your JCL, use SRCHFOR ABC130. and find all the places where you have ABC130 (wow, lots of places). You will have to change the JCL to run the subsystem at the start of the test, run your test, and change it all back ready for production next week. With all the changes you need to be careful not to make a mistake. (And of course all the change request paperwork needs to be approved.)

A better way is to use dataset aliases.

DEFINE ALIAS(NAME(ABC.SABCAUTH) RELATE(ABC120.SABCAUTH))
DEFINE ALIAS(NAME(ABC.SABCANLE) RELATE(ABC120.SABCANLE))
etc.

So when you use ABC.SABCAUTH, under the covers it uses ABC120.SABCAUTH.

Your JCL now looks like

//STEPLIB   DD DSN=ABCUSER.EG1.SABCAUTH,DISP=SHR   /* USER LIB  */ 
// DD DSN=ABC.SABCANLE,DISP=SHR
// DD DSN=ABC.SABCAUTH,DISP=SHR
// DD DSN=CEE.SCEERUN,DISP=SHR
...
//FILE1 DD DSN=ABCUSER.EG1.FILE01,DISP=SHR
//FILE2 DD DSN=ABCUSER.EG1.FILE02,DISP=SHR

You do not need to worry about APF authorising ABC.SABCAUTH. The dataset which is checked is the dataset the alias points to.

To test the next release this weekend, you delete the aliases and define the new ones. You do not need to change your JCL libraries. You run your tests, and at the end delete the new aliases and redefine the old ones. The alias definitions fit on one screen, which is much easier than changing all your JCL libraries, and less error prone. (And someone else can review the definitions before you make the changes.)

An alternative way

You could use a system symbol, for example EGHLQ = ABC120, and refer to it as in //STEPLIB DD DSN=&EGHLQ..SABCAUTH
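
A minimal sketch, assuming the symbol is set in an IEASYMxx parmlib member:

SYMDEF(&EGHLQ='ABC120')

and used in the JCL as

//STEPLIB  DD DSN=&EGHLQ..SABCAUTH,DISP=SHR

Note that system symbols have traditionally only been substituted in started task JCL; for batch jobs the JES support for system symbols has to be enabled (for example, JES2 JOBCLASS SYSSYM=ALLOW).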

Hands up…

If you are guilty of the problems raised in the blog post, you can get round them.

You can implement the alias for the product libraries, and gradually change all references to use the aliases.

Where you have

//FILE1     DD DSN=ABC120.EG1.FILE01,DISP=SHR 
//FILE2 DD DSN=ABC120.EG1.FILE02,DISP=SHR

You can define an alias for these and use DSN=ABCUSER.EG1.FILE01 (see the sketch below). Once you’ve made the change your friends will appreciate the clarity, and the only people who know about the mess you made are the storage administrators.
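
A sketch of the definitions, using the names from this post:

DEFINE ALIAS(NAME(ABCUSER.EG1.FILE01) RELATE(ABC120.EG1.FILE01))
DEFINE ALIAS(NAME(ABCUSER.EG1.FILE02) RELATE(ABC120.EG1.FILE02))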

In the long term you may be able to fix it by copying the datasets to new ones with the proper name, and deleting the old ones.

Defining dataset aliases is good, but take care

I can define a dataset alias CSQ.SCSQAUTH which points to dataset CSQ920.SCSQAUTH, and use CSQ.SCSQAUTH in my JCL. When I want to change to the next level of code, I just change CSQ.SCSQAUTH to point to CSQ940.SCSQAUTH, and my JCL just works unchanged! Everyone should do this.

Background

As part of z/OS catalogs, you can define aliases to keep user information out of the master catalog. For example, point a high level qualifier to a catalog:

DEFINE ALIAS (NAME(CSQ920) RELATE('USERCAT.Z31B.PRODS'))
DEFINE ALIAS (NAME(CSQ) RELATE('USERCAT.Z31B.PRODS'))

If I now define a dataset CSQ.COLIN, the information about this data set will be stored in the catalog USERCAT.Z31B.PRODS. When the dataset is used, the name is looked up in the master catalog, which says go and use catalog USERCAT.Z31B.PRODS.

Dataset level aliases

I can also define an alias at the data set level. For example CSQ.SCSQAUTH is an alias of CSQ920.SCSQAUTH. I can then use CSQ.SCSQAUTH in my JCL instead of CSQ920.SCSQAUTH

When the next version of code is available, I can change the alias of CSQ.SCSQAUTH to point to CSQ940.SCSQAUTH and my JCL will work as before. I do not need to go through my JCL libraries replacing the old names with the new. This is great – everyone should use it.

Create the alias using

DEFINE ALIAS(NAME(CSQ.SCSQAUTH) RELATE(CSQ920.CSQ9.SCSQAUTH))

For this to work the data set alias CSQ.SCSQAUTH must be in the same catalog as the data set it references, so both name and target need to be in USERCAT.Z31B.PRODS.

If I use ISPF 3.4 with CSQ.SCSQAUTH it gives a volume of *ALIAS. If I browse the dataset it shows data set CSQ920.CSQ9.SCSQAUTH.

You do not need to worry about APF authorisation because controls are on the target data set CSQ920.CSQ9.SCSQAUTH.
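
Switching to the new level is then just a delete and a redefine of the alias. A sketch, assuming the new libraries follow the same naming pattern:

DELETE CSQ.SCSQAUTH ALIAS
DEFINE ALIAS(NAME(CSQ.SCSQAUTH) RELATE(CSQ940.CSQ9.SCSQAUTH))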

What problems did I have?

I had a frustrating hour or so until I got it to work.

I had a different user catalog for the CSQ HLQ.

DEFINE ALIAS (NAME(CSQ920) RELATE('USERCAT.Z31B.PRODS'))
DEFINE ALIAS (NAME(CSQ) RELATE('USERCAT.COLINS'))
DEFINE ALIAS(NAME(CSQ.SCSQAUTH) RELATE(CSQ920.CSQ9.SCSQAUTH))

The above statements worked successfully, but ISPF 3.4 did not show CSQ.SCSQAUTH.

The commands

LISTCAT CATALOG('CATALOG.Z31B.MASTER') ALIAS
LISTCAT CATALOG('USERCAT.COLINS') ALIAS

did not show any entries for CSQ.SCSQAUTH.
If I tried to redefine the data set alias, it said DUPLICATE entry.

I had to use

LISTCAT CATALOG('USERCAT.Z31B.PRODS') ALIAS

and there was my CSQ.SCSQAUTH.

The documentation says

If the entryname in the RELATE parameter is non-VSAM, choose an aliasname in the NAME parameter that resolves to the same catalog as the entryname.

which I missed the first time round.

I had to delete the dataset alias from the catalog for the target dataset

DELETE CSQ.SCSQAUTH  CATALOG('USERCAT.Z31B.PRODS') 

I then deleted the alias for CSQ, redefined it to point to the correct user catalog, redefined the data set alias and it worked.

DELETE    CSQ          ALIAS 
DEFINE ALIAS (NAME(CSQ) RELATE('USERCAT.Z31B.PRODS'))
DEFINE ALIAS(NAME(CSQ.SCSQAUTH) RELATE(CSQ920.CSQ9.SCSQAUTH))

Getting sshfs to work to z/OS

You can “mount” a remote file system as a local directory over sshfs (the SSH file system).

Getting this working was a challenge. I do not know if it was an sftp problem or a z/OS problem.

The command, from Linux, is

sshfs colin@10.1.1.2: ~/mountpoint

where mountpoint is a local directory, and my z/OS system is on 10.1.1.2

This flows into the SSH daemon (SSHD) on z/OS which handles the handshake and encryption.

For the IBM provided SSHD, the /etc/ssh/sshd_config config file has

Subsystem sftp /usr/lib/ssh/sftp-server 

Where /usr/lib/ssh/sftp-server is the executable to do the work. The IBM supplied object is a load module. You could replace this with a script or other module.

Once the session has been established you can access the files, as if they were on the local system.
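
To unmount it again from Linux (sshfs is FUSE based) you can use

fusermount -u ~/mountpoint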

What is running on z/OS?

If you use the ps -ef command it displays

     UID      PID     PPID CMD
OMVSKERN 50397264 67174474 /usr/sbin/sshd -f /etc/ssh/sshd_config -R
   COLIN 67174482 50397264 /usr/sbin/sshd -f /etc/ssh/sshd_config -R
   COLIN 50397267 67174482 sh -c /usr/lib/ssh/sftp-server
   COLIN 83951719 50397267 /usr/lib/ssh/sftp-server

This shows the calling chain – the first (SSHD) is at the top, and the last, /usr/lib/ssh/sftp-server, is doing the work to process the files.

The shell used depends on the OMVS(PROGRAM()) defined for the userid.
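
For example, a sketch of the RACF command to set this for a userid:

ALTUSER COLIN OMVS(PROGRAM('/bin/sh'))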

When did sshfs work?

If I had OMVS(PROGRAM('/bin/sh')) then sshfs worked OK; I could use the files as expected.

If the program was bash or zsh, then the data as seen from Linux was in EBCDIC and so was not usable.

So how do I use zsh or bash?

I got round this problem…

I specified the userid as having OMVS(PROGRAM('/bin/sh')), and changed to the bash shell in the logon script.

If I logon with ssh colin@10.1.1.2 then there are SSH environment variables available in /etc/profile and ~/.profile:

SSH_CLIENT="10.1.0.2 44898 22"
SSH_CONNECTION="10.1.0.2 44898 10.1.1.2 22"
SSH_TTY="/dev/ttyp0000"

In my ~/.profile I’ve put

if [[ ! -z "$SSH_CLIENT" ]]
then
  set -x
  # SSH_CLIENT has a value ... so an SSH terminal
  # bash="/usr/lpp/Rocket/rsusr/ported/bin/bash"
  bash="/u/zopen/usr/local/bin/bash"
  echo "shell $SHELL bash $bash"
  if [[ $SHELL != $bash ]]
  then
    echo "using the bash shell"
    export SHELL="$bash"
    exec "$bash" # replace the current script with bash
    # any code after the exec is not executed
  fi
fi

which says: if the $SSH_CLIENT variable is not the empty string (the session came in over an SSH connection), then invoke $bash, which replaces the current environment with /u/zopen/usr/local/bin/bash.

With this I could use both sshfs for remote file access, and ssh for terminal access.

If there are better ways of doing this, please let me know

OMVS is the way ahead!

If you have any suggestions in this area – please let me know!

I recently found this article which covers the same ground with more/better explanations.

With lots of development of open source tools for OMVS on z/OS, I thought I would try it out. I’ve been amazed at how good it is. This blog post is a set of “one liners” to help people get started and over the initial hump of moving towards this. I’ll add more blog posts as I go further down the path.

My original task was to develop some Python scripts to extract profiles from RACF. I use Ubuntu Linux on my laptop.

I used to use OMVS from ISPF, because I thought the interface through SSH was poor in comparison. I now think the OMVS interface is limited compared to the SSH interface, because of all the enhancements and packages available through SSH.

One example is the “less” command, which I use on Linux very frequently. It does not work with ISPF OMVS, but it is available through SSH.

See Setting Up the z/OS UNIX Shell (Correctly & Completely) for an excellent article on moving to OMVS through SSH.

Editing is easy

  • Create a mountpoint on your laptop.
  • use sshfs colin@10.1.1.2: ~/mountpoint
  • use vscode to edit the files. This is a very popular editor/IDE.
  • you have to be careful of tagging the file. Create a file using touch, then use chtag -t -c ISO8859-1 filename, and then edit it (see the sketch after this list). It is editable in vscode and ISPF (but not at the same time, of course). Yesterday the files needed the tag ISO8859-1; today they only work without the tag (chtag -r newtry.py) – I do not know what has changed!
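
A sketch of the tagging commands (newtry.py is just an example file name):

touch newtry.py
chtag -t -c ISO8859-1 newtry.py   # tag the file as ISO8859-1 (ASCII) text
ls -T newtry.py                   # display the file tag
chtag -r newtry.py                # remove the tag again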

I used other tools, such as diff, from my laptop against files in my z/OS home directory.

You can install packages like Zowe on z/OS and use vscode to edit files and datasets from lists, to issue z/OS commands, and to look at spool. This is a heavyweight package, but it is very popular. The editing via sshfs is very easy.

Install zopen

zopen is a set of packages ported from open source. It was easy to install.

I used zopen install … to install packages. For example

  • zopen install which; this tells you the full path of a command
  • zopen install less; less gives a fast display of a file in a terminal, with search capability. It is often faster than an editor/browser. less is a more advanced version of the more command, which allows you to page through a file.

Use the bash or zsh shell

In my OMVS userid profile I set PROGRAM(/u/zopen/usr/local/bin/bash). This version of bash has proper key support. For example, delete really does delete characters. For the Rocket port of bash, the delete key is a dead key.

If your delete key does not work on the command line

Use the zopen bash shell, PROGRAM(/u/zopen/usr/local/bin/bash) or the zsh shell PROGRAM(/bin/zsh).

Logon

Use a command like ssh colin@10.1.1.2 to get to z/OS. You can configure SSH and transfer a key file to z/OS, so you can logon without a password.
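
A sketch of setting this up from the Linux side:

ssh-keygen -t ed25519        # create a key pair, if you do not already have one
ssh-copy-id colin@10.1.1.2   # append the public key to ~/.ssh/authorized_keys on z/OS
ssh colin@10.1.1.2           # should now logon without a password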

Using the right shell

The default Bourne shell is very back level. You should use bash or zsh.

bash

You should use bash from zopen. Use PROGRAM(/u/zopen/usr/local/bin/bash) in your RACF profile. The zopen bash has more function than Rocket’s bash – and the delete key works as expected.

zsh

Many people recommend zsh. Use PROGRAM(/bin/zsh) in your RACF profile to use it. See Oh My Zsh or: How I Learned to Stop Worrying and Love the Shell for a good introduction to zsh extensions. For example, there is a sudo plugin and a git interface.

Issuing TSO commands

You can use the tsocmd command to issue a TSO command and get the response back:

tsocmd "LU COLIN" |less

You can then page through the output.

Command completion

Bash and zsh have command completion.
If you type zopen [tab] [tab] it will display the options available for the zopen command.

You can use ls [tab] [tab] to display all the files in the current directory

RACF

I’ve been using the Python interface (pysear) to RACF to display information, and manage profiles. It’s great and very flexible.

SDSF

There is a Python interface to SDSF, available in z/OS 3.1, but it is not available in the 3.1 images I have.

My ~/.profile

export _BPXK_AUTOCVT=ON
export _CEE_RUNOPTS="FILETAG(AUTOCVT,AUTOTAG) POSIX(ON)"
export _TAG_REDIR_ERR=txt
export _TAG_REDIR_IN=txt
export _TAG_REDIR_OUT=txt
export CXX="/bin/xlclang++"
export CC="/bin/xlC"
export CC="/bin/xlclang++"
export PATH=${PATH}:/bin
export PATH=${PATH}:/u/colin/.local/bin
export PATH=${PATH}:/u/tmp/zowep/bin/
export PATH=${PATH}:/usr/lpp/IBM/cyp/v3r12/pyz/bin
export LIBPATH=${LIBPATH}:/usr/lpp/IBM/cyp/v3r12/pyz/lib
. /u/zopen//etc/zopen-config --override-zos-tools
# if I've come in from SSH....
if [[ -z "$SSH_CLIENT" ]]
then
  # dummy
  xxx="$SSH_CLIENT"
else
  set -x
  zopenkw="alt audit build clean compare-versions compute-builddeps \
config-helper create-cicd-job create-repo diagnostics \
generate help2man info init install md2man migrate-buildenv \
migrate-groovy promote publish query remove split-patch \
update-cacert usage version whichproject "
  echo $zopenkw
  complete -W "$zopenkw " zopen
fi

That’s as far as I’ve got.

Keeping people out of the master catalog.

I had written “Here’s another nice mess I’ve gotten into! My master catalog is full of junk”, which describes what I did once I found my master catalog was full of stuff which should not be there.

I’ve now got round to finding out how to stop people from putting rubbish there in the first place!

See One minute mvs: catalogs and datasets for an introduction to master and user catalogs.

The master catalog should have some system datasets, aliases, and not much else.

An alias says: for this high level qualifier (COLIN), go to this user catalog (USER.COLIN.CATALOG).

A catalog is a dataset, and you can use a RACF profile to protect it, so only authorised people can update it. Typically, when you define a userid or a high level qualifier, you should also define an alias for that userid (or HLQ), pointing to a user catalog.

To keep user data out of the master catalog you need

  • one or more user catalogs – for example, do you give each user their own catalog, have one per department, or one for all users? These catalogs are typically defined by storage administrators (or automation set up by storage administrators).
  • an alias for each userid, giving the name of the catalog that userid should use. These aliases are set up by the people (or automation) that define userids.
  • an alias for each dataset High Level Qualifier (HLQ), giving the name of the catalog that the HLQ should use. These aliases are set up by the people (or automation) that define the high level qualifiers. An example HLQ is CEE, or DB2. (See the sketch after this list.)
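
A minimal sketch of the definitions (the catalog name, volume, and size are made-up examples):

DEFINE USERCATALOG(NAME('USERCAT.DEPT1') -
       VOLUME(USR001) CYLINDERS(10 5) ICFCATALOG)
DEFINE ALIAS(NAME(COLIN) RELATE('USERCAT.DEPT1'))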

If you migrate to a system with a new master catalog (for example with zPDT or zD&T), you will need to import the user catalogs into the master catalog, and redefine the aliases.

Import a user catalog

When I tried to import a user catalog into the master catalog, I got

ICH408I USER(COLIN   ) GROUP(TEST    ) NAME(CCPAICE             )
  CATALOG.Z31B.MASTER CL(DATASET ) VOL(B3SYS1)
  INSUFFICIENT ACCESS AUTHORITY
  FROM CATALOG.Z31B.* (G)
  ACCESS INTENT(UPDATE )  ACCESS ALLOWED(READ )

so any userid importing or exporting a catalog needs UPDATE access to the master catalog data set.
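
A sketch of the import, connecting the user catalog from the message above into the master catalog:

IMPORT OBJECTS((USERCAT.Z31B.PRODS -
       DEVICETYPE(3390) VOLUMES(B3SYS1))) CONNECT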

Defining and deleting an alias

Having set up RACF profiles, and given my userid COLIN only READ access to the master catalog, I found my userid could still define and delete aliases. It took a couple of days to find out why.

  • If a userid has ALTER access to the CLASS(FACILITY) profile STGADMIN.IGG.DEFDEL.UALIAS, the userid can define and delete aliases. This overrides the dataset access checks. (See the sketch after this list.)
  • If a userid does not have ALTER access to the profile, then normal dataset checks are made.
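
A sketch of defining the profile and giving a storage administration group access (STGADM is a made-up group name):

RDEFINE FACILITY STGADMIN.IGG.DEFDEL.UALIAS UACC(NONE)
PERMIT STGADMIN.IGG.DEFDEL.UALIAS CLASS(FACILITY) ID(STGADM) ACCESS(ALTER)
SETROPTS RACLIST(FACILITY) REFRESH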

What I learned…

  • My userid had “special”. As the documentation says, “The RACF SPECIAL attribute allows you to update any profile in the RACF database”. This meant I could display and update any profile.
  • There is a CLASS(FACILITY) profile STGADMIN.IGG.DEFDEL.UALIAS which allows you to define and delete user aliases in the (master) catalog.
  • If my userid had SPECIAL, or the userid was in group SYS1, I could issue the command

rlist facility STGADMIN.IGG.DEFDEL.UALIAS

and it gave

CLASS      NAME
-----      ----
FACILITY   STGADMIN.IGG.* (G)

LEVEL  OWNER     UNIVERSAL ACCESS  YOUR ACCESS  WARNING
-----  --------  ----------------  -----------  -------
 00    IBMUSER   NONE              ALTER        NO

USER      ACCESS
----      ------
SYS1      ALTER
IBMUSER   ALTER

If my userid did not have SPECIAL and was not in SYS1, I got

ICH13002I NOT AUTHORIZED TO LIST STGADMIN.IGG.*

When my userid was connected to the group SYS1, it got the ALTER access to the profile, and overrode the RACF profiles for the catalog data set.

Which is my master catalog?

At IPL, it reports

IEA370I MASTER CATALOG SELECTED IS CATALOG.Z31B.MASTER 

You can use the operator command D IPLINFO

SYSTEM IPLED AT 07.26.58 ON 01/02/2026                                              
RELEASE z/OS 03.01.00 LICENSE = z/OS
USED LOADCP IN SYS1.IPLPARM ON 00ADF

My load parm member, SYS1.IPLPARM(LOADCP) has

IODF     99 SYS1
INITSQA  0000M 0008M
SYSCAT   B3SYS1113CCATALOG.Z31B.MASTER
SYSPARM  CP
IEASYM   (00,CP)

The catalog is called CATALOG.Z31B.MASTER and is on volume B3SYS1

Does a RACF profile exist?

See What RACF profile is used for a data set?

tso listdsd dataset('CATALOG.Z31B.MASTER')
tso listdsd dataset('CATALOG.Z31B.MASTER') generic

Showed there was no profile defined.

Create the profile

* DELDSD  'CATALOG.Z31B.*'                                   
ADDSD 'CATALOG.Z31B.*' UACC(READ)
PERMIT 'CATALOG.Z31B.*' ID(IBMUSER ) ACCESS(CONTROL)
PERMIT 'CATALOG.Z31B.*' ID(COLIN ) ACCESS(READ )

When I tried to use the master catalog from a general userid the request failed.

DELETE TEST  ALIAS                                                                                                 
IDC3018I SECURITY VERIFICATION FAILED+
IDC3009I ** VSAM CATALOG RETURN CODE IS 56 - REASON CODE IS IGG0CLFT-6
IDC0551I ** ENTRY COLIN.TEST NOT DELETED
IDC0014I LASTCC=8

Hmm that’s strange

With userid COLIN, I could still issue commands, such as DELETE TEST ALIAS, even though I had given it only READ access.
If I displayed the profile from userid COLIN, it had

INFORMATION FOR DATASET CATALOG.Z31B.* (G)

LEVEL  OWNER     UNIVERSAL ACCESS  WARNING  ERASE
-----  --------  ----------------  -------  -----
 00    COLIN     READ              NO       NO

YOUR ACCESS  CREATION GROUP  DATASET TYPE
-----------  --------------  ------------
   READ          SYS1          NON-VSAM

This had me confused for several hours. That’s when I found out about the presence of the STGADMIN.IGG.DEFDEL.UALIAS profile.

Summary

You want user (non-system) datasets to be in a user catalog, rather than the master catalog. This makes migrating to a new master catalog much easier: you just have to import the user catalogs and redefine the aliases.

You need to set up

  • one (or more) user catalogs
  • aliases to connect the userid (and High Level Qualifiers) to a catalog
  • give authorised users ALTER access to the CLASS(FACILITY) profile STGADMIN.IGG.DEFDEL.UALIAS to allow them to maintain aliases.
  • define a RACF profile for the master catalog with UACC(READ).
  • give the people who need to define, import, or export catalogs UPDATE access to the master catalog dataset.

What RACF profile is used for a data set?

I was trying to find out why I had write access to a data set, when I was only expecting read access.

You can search for profiles

SEARCH CLASS(DATASET) MASK(COLIN)

gave me

COLIN.ENCR.* (G)
COLIN.ENCR.** (G)
COLIN.ENCRCLEA.* (G)
COLIN.ENCRDH.* (G)
COLIN.ENCR2.* (G)
COLIN.ENCR3.* (G)
COLIN.MQ944.** (G)

The command (CLASS(DATASET) is the default, so can be omitted)

SEARCH MASK(COLIN,44)

gave me profiles starting with COLIN containing 44

COLIN.MQ944.** (G)

List a profile LISTDSD

tso listdsd dataset('COLIN.MQ944.**')

gave

INFORMATION FOR DATASET COLIN.MQ944.** (G)

LEVEL  OWNER     UNIVERSAL ACCESS  WARNING  ERASE
-----  --------  ----------------  -------  -----
 00    COLIN     NONE              NO       NO
...
YOUR ACCESS  CREATION GROUP  DATASET TYPE
-----------  --------------  ------------
   ALTER         SYS1          NON-VSAM

The command

tso listdsd dataset('COLIN.MQ944.SOURCE')

gave

ICH35003I NO RACF DESCRIPTION FOUND FOR COLIN.MQ944.SOURCE

You need the generic option

tso listdsd dataset('COLIN.MQ944.SOURCE') generic

gave

INFORMATION FOR DATASET COLIN.MQ944.** (G)

LEVEL  OWNER     UNIVERSAL ACCESS  WARNING  ERASE
-----  --------  ----------------  -------  -----
 00    COLIN     NONE              NO       NO
...
YOUR ACCESS  CREATION GROUP  DATASET TYPE
-----------  --------------  ------------
   ALTER         SYS1          NON-VSAM

This says: if I were to use data set 'COLIN.MQ944.SOURCE', RACF would check profile COLIN.MQ944.**, and I would have ALTER access to it.

What is RACF GLOBAL….

With RACF you can define a profile and give userids access to it. You can also define a global profile for heavily used datasets, so the profile is cached and no I/O is needed to the RACF dataset.

Define a normal profile

 ADDSD  'COLIN.Z31B.*' UACC(READ)                         
PERMIT 'COLIN.Z31B.*' ID(IBMUSER,COLIN) ACCESS(CONTROL)

You can list it

LISTDSD DATASET('COLIN.Z31B.*') ALL

and delete it

DELDSD 'COLIN.Z31B.*' 

For some resources used very frequently, you can cache definitions in memory. These are called GLOBAL definitions. When a check is made for a userid to access a resource, if there is a global definition, there should be no RACF database I/O, and the check should be fast.

Define a global resource

You need to set up the global resource before you can use it. See the IBM documentation.

Example 1 contains

SETROPTS GLOBAL(DATASET)
RDEFINE GLOBAL DATASET
SETROPTS GLOBAL(DATASET) REFRESH

and

RALTER   GLOBAL DATASET ADDMEM('SYS1.HELP'/READ)
ADDSD 'SYS1.HELP' UACC(READ)
SETROPTS GLOBAL(DATASET) REFRESH

to define a resource. It gives a default of read access to the data set SYS1.HELP.

You can display the contents of the global data set class

rlist global dataset

which gives

CLASS      NAME
-----      ----
GLOBAL     DATASET
...
RESOURCES IN GROUP
------------------
SYS1.HELP/READ
...

You can delete an entry from the global profile

RALTER   GLOBAL DATASET DELMEM('SYS1.HELP'/READ)
SETROPTS GLOBAL(DATASET) REFRESH

You can remove the global dataset class if there are no elements in the class

RDELETE  GLOBAL DATASET
SETROPTS NOGLOBAL(DATASET)
SETROPTS GLOBAL(DATASET) REFRESH

If you now list the global profile

rlist global dataset

gives

 ICH13003I DATASET NOT FOUND

I’m guessing that if you want READ access to the SYS1.HELP data set, the entry in the GLOBAL DATASET class will be found. If you want UPDATE access to SYS1.HELP, because there is no entry in the GLOBAL DATASET class, checking will fall through to the normal profiles defined with ADDSD.

You do not need to configure the GLOBAL DATASET class, but it can give performance benefits if you are on a heavily used system. It is not enabled on my one-person zD&T system.

Beware

The documentation also defines a “normal” profile, like ADDSD 'SYS1.HELP' UACC(READ). I’m guessing that this is a fall back if someone deactivates the global dataset profiles.

So you should read the documentation and follow its instructions.

Getting table data out of html – successfully

A couple of times I’ve wanted to get information from documentation into my program, for example from a table in the documentation.

I want to extract

  • "cics:operator_class" : "set","add","remove","delete"
  • "cics:operator_classes" : N/A

then extract those which have N/A (or those with "set" etc.).

Background

From the picture you can see the HTML table is not simple: it has a coloured background, some text is in one font, and other text is in a different font.

The table is not like

<table>
<tr><td>"cics:operator_classes"</td>...<td>N/A</td></tr>
</table>

and so relatively easy to parse.

It will be more like one long string containing

<td headers="ubase__tablebasesegment__entry__39 ubase__tablebasesegment__entry__2 " 
class="tdleft">
&nbsp;
</td>
<td headers="ubase__tablebasesegment__entry__39 ubase__tablebasesegment__entry__3 "
class="tdleft">
'Y'
</td>

where &nbsp; is a non-breaking space.

Getting the source

Some browsers allow you to save the source of a page, and some do not.
I use Chrome to display and save the page.

You can use Python facilities to capture a web page.
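
For example, a minimal sketch using the Python standard library (the URL is a placeholder):

import urllib.request

url = "https://www.example.com/racf-table.html"  # placeholder URL
with urllib.request.urlopen(url) as response:
    data = response.read().decode("utf-8")       # the HTML as a string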

My first attempt with Python

For me, the obvious approach was to use Python to process it. Unfortunately it complained about some of the HTML, so I spent some time using Linux utilities to remove the HTML causing problems. This got more and more complex, so I gave up. See Getting table data out of html – unsuccessfully.

Using Python again

I found Python has different parsers for HTML (and XML), and there was a better one than the one I had been using. The BeautifulSoup parser handled the complex HTML with no problems.

My entire program was (it is very short!)

from bs4 import BeautifulSoup

# read the data from the file
file = "/home/colin/Downloads/Dataset SEAR.html"
with open(file, "r") as myfile:
    data = myfile.read()

soup = BeautifulSoup(data, 'html.parser')
# nonBreakSpace = u'\xa0'
tables = soup.find_all(['table'])
for table in tables:
    tr = table.find_all("tr")          # all the rows in this table
    for t in tr:
        line = list(t)                 # the fields of the row as a list
        if len(line) == 11:
            # field 1 is the name, field 7 is the N/A or "set" etc column
            print(line[1].get_text().strip(), line[7].get_text().strip())
        else:
            print("len:", len(line), line)

This does the following

  • file =… with open… data =… reads the data from a file. You could always use a URL and read directly from the internet.
  • tables = soup.find_all([‘table’]) extracts the data within the specified tags. That is, all the data between <table…> and </table> tags.
  • for table in tables: processes each table in turn (it is lucky we do not have nested tables)
  • tr = table.find_all(“tr”) extracts all the rows within the current table.
  • for t in tr: processes each row
  • line = list(t) returns all of the fields of the row as a list

the variable line has fields like

' ', 
<td><code class="language-plaintext highlighter-rouge">"tme:roles"</code></td>,
' ',
<td><code class="language-plaintext highlighter-rouge">roles</code></td>,
' ',
<td><code class="language-plaintext highlighter-rouge">string</code></td>,
' ',
<td>N/A</td>,
' ',
<td><code class="language-plaintext highlighter-rouge">"extract"</code></td>,
' '
  • print(line[1].get_text().strip(),… takes the second field, extracts the text value from it ignoring any tags (“tme:roles”), removes any leading or trailing blanks, and prints it.
  • print(…line[7].get_text().strip()) takes the eighth field, extracts the value (N/A), removes any leading or trailing blanks, and prints it.

This produced a list like

  • "base:global_auditing" N/A
  • "base:security_label" "set""delete"
  • "base:security_level" "set""delete"

I was only interested in those with N/A, so I used

python3 ccpsear.py |grep N/A | sed 's.N/A.,.g '> mylist.py

which selected those with N/A, changed N/A to “,” and created a file mylist.py

Note: some tables have a non-breaking space to represent an empty cell. These sometimes caused problems, so I had code to handle this.

nonBreakSpace = u'\xa0'
for value in line:
    if value == " ":
        continue
    if value == nonBreakSpace:
        continue

Getting table data out of html – unsuccessfully

A couple of times I’ve wanted to get information from documentation into my program, for example from a table in the documentation.

I want to extract

  • "cics:operator_class" : "set","add","remove","delete"
  • "cics:operator_classes" : N/A

then extract those which have N/A (or those with "set" etc.).

Background

From the picture you can see the HTML table is not simple: it has a coloured background, some text is in one font, and other text is in a different font.

The table will not just be

<table>
<tr><td>"cics:operator_classes"</td>...<td>N/A</td></tr>
</table>

and so relatively easy to parse. My first thoughts were to use grep to extract the rows, then extract the data.

It will be more like one long string containing

<td>
<code class="language-plaintext highlighter-rouge">"cics:operator_classes"</code>
</td>

which means grep will not work directly with the data.

Getting the source

Some browsers allow you to save the source of a page, and some do not.
I use Chrome to display and save the page.

Parsing using Linux tools

For me the obvious approach was to use Python to process it. Unfortunately it complained about some of the HTML, so I spent some time trying to remove the stuff I didn’t need. This proved to be an interesting diversion to a dead end. In the end Python was the right answer, see Getting table data out of html – successfully.

Linux utilities only go so far

One of the pages I wanted to process was over 500KB, and I wanted just a small part of it. Looking at the data, it was one long string with no new lines, so it was very difficult to display.

Splitting the text up

The first thing I did was to extract the <table>… </table> information and ignore the rest.

I found the Unix command sed (stream editor) very useful

sed 's!<table!\n<table!g' racf.html 

This reads from the file racf.html and changes <table…. to \n<table..., adding a new line before each <table. When you edit or display the file, the <table... tags are at the start of a line.

I fed the output of this into another sed command

sed 's!/table>!/table>\n!g' 

which puts a line end after every /table> tag. The data now looks like

<!DOCTYPE html><html lang="en-US"><meta http-equiv="Content-Type" content="text/..
<table…
</table>
...

I then used sed to print only the lines from a line starting with <table to a line starting with </table>

sed -n  '/<table/, /<\/table>/p' > racf1.html 

The final command was

sed 's!<table!\n<table!g' racf.html |sed 's!/table>!/table>\n!g'  | sed -n  '/<table/, /<\/table>/p' > racf1.html 

Using regular expressions

You can use regular expressions and say remove data between <colgroup to /colgroup>.

This is where it starts to get hard

If you have the following string, and you process it to remove the data between <colgroup and /colgroup>

my <colgroup> abc</colgroup> and <colgroup>xyz</colgroup> and the rest

then some tools will give

my and the rest

which is called greedy – it removes as much as possible, from the first <colgroup to the last /colgroup> to meet the instructions.

I had to use perl’s regular expressions, which can be configured to be non-greedy

cat racf1.html |perl -p -e 's,<colgroup.*?/colgroup>,,g'

and produced

my and and the rest

which is what I was expecting.
The commands to extract the few fields from the hundreds of KB of data were getting more and more complex; it would have been quicker to extract the fields of interest by hand.
I backtracked and went back to my Python program, which was successful.

How do I use updated libraries in the linklst?… and then whoops

I have an updated C run time library. How do I test it on z/OS, bearing in mind the libraries are in the linklist (datasets available to everyone), and I want to use them from Unix?

You can refresh the linklist by creating a new definition using console commands, and activating it.

Because you may need to update more than one library as part of the update, you update the definitions, then activate them. If you updated the linklist one data set at a time, you could get inconsistent libraries in the linklist for a short period.

You have one active (current list) and can have multiple other list definitions.

You can

  • copy the current definitions
  • add data sets
  • remove data sets
  • activate a definition

To display what is in the current link list you can use the SDSF option LNK, or use the operator command D PROG,LNKLST

D PROG,LNKLST
CSV470I 12.06.05 LNKLST DISPLAY 605
LNKLST SET LNKLST00  LNKAUTH=LNKLST
ENTRY APF VOLUME DSNAME
  1    A  B3RES1 SYS1.LINKLIB
  2    A  B3RES1 SYS1.MIGLIB
  3    A  B3RES1 SYS1.CSSLIB
  4    A  B3RES1 SYS1.SIEALNKE
  5    A  B3RES1 SYS1.SIEAMIGE
  6    A  B3RES1 SYS1.SHASLNKE
  7    A  B3RES1 SYS1.SERBLNKE
  8    A  B3RES1 SYS1.SGRBLINK
  9    A  B3RES1 SYS1.SHASMIG
 10       B3RES1 SYS1.SCBDHENU
...

Note: Some of these are APF authorised.

You can display what lists are active using the command

D PROG,LNKLST,NAMES

This gave

CSV472I 19.25.26 LNKLST DISPLAY 848
LNKLST SET   LNKLST SET   LNKLST SET   LNKLST SET
LNKLST00     COLIN

To create a new definition called MY based on the current definition, and activate it:

SETPROG LNKLST,DEFINE,NAME=MY,COPYFROM=CURRENT
SETPROG LNKLST,ADD,NAME=MY,DSN=CEE.SCEERUN.NEW
SETPROG LNKLST,DELETE,NAME=MY,DSNAME=CEE.SCEERUN
SETPROG LNKLST,ADD,NAME=MY,DSN=CEE.SCEERUN2.NEW
SETPROG LNKLST,DELETE,NAME=MY,DSNAME=CEE.SCEERUN2
SETPROG LNKLST,ACTIVATE,NAME=MY

You can put these statements in a parmlib member PROGxx (see here), and then activate them using the operator command

t prog=xx

Or you can issue these as operator commands.

How do jobs get the new libraries

This is more subtle than I first thought.

From my TSO userid COLIN, I went into Unix and issued a Python command, and the command now worked, so I was picking up the libraries. Round of applause – job done.

I then checked my ISPF session. The command ISRDDN displays the datasets allocated to ISPF. The lnk command shows what is defined in the lnklst. This showed

B3RES1  >  LINKLIST  SYS1.LINKLIB
B3RES1  >            SYS1.MIGLIB
...
B3RES2  >            CEE.SCEERUN
...
B3RES2  >            CEE.SCEERUN2

so my main ISPF task was unchanged.

I used the command

setprog LNKLST,UPDATE,JOB=COLIN
CSV504I JOB COLIN IS NOW USING THE CURRENT LNKLST SET

to update the COLIN job, and then the ISRDDN LNK gave

B3PRD1        >  LINKLIST  FAN140.SEAGLMD
...
USER08  *SMS  >            CEE.SCEERUN.NEW
B3USR1  *SMS  >            CEE.SCEERUN2.NEW

which shows that the COLIN job has been updated.

Logging on with a different userid, it got the updated definitions.

I tried to logon using SSH, and this failed with messages on the console

BPXP024I BPXAS INITIATOR STARTED ON BEHALF OF JOB SSHD1 RUNNING IN ASID 004B
ICH408I USER(OMVSKERN) GROUP(OMVSGRP ) NAME(OMVSKERN)
  CSFIQA CL(CSFSERV )
  WARNING: INSUFFICIENT AUTHORITY - TEMPORARY ACCESS ALLOWED
  FROM CSF* (G)
  ACCESS INTENT(READ )  ACCESS ALLOWED(NONE )
ICH420I PROGRAM CEEBINIT FROM LIBRARY CEE.SCEERUN.NEW CAUSED THE
  ENVIRONMENT TO BECOME UNCONTROLLED.
BPXP014I ENVIRONMENT MUST BE CONTROLLED FOR DAEMON (BPX.DAEMON)
  PROCESSING.
ICH420I PROGRAM CEEBINIT FROM LIBRARY CEE.SCEERUN.NEW CAUSED THE
  ENVIRONMENT TO BECOME UNCONTROLLED.
BPXP014I ENVIRONMENT MUST BE CONTROLLED FOR DAEMON (BPX.DAEMON)
  PROCESSING.

This is because the CEE.SCEERUN was APF authorised, and CEE.SCEERUN.NEW was not APF authorised.

I dynamically APF authorised them

SETPROG APF,ADD,DSN=CEE.SCEERUN2.NEW,SMS
SETPROG APF,ADD,DSN=CEE.SCEERUN.NEW,SMS

SSH failed the same way.

The easiest thing for me to do on my system was Re-IPL.

Once you’ve done the activate

Thanks to Todd Burch for reminding me that you need to refresh the Library Lookaside (LLA).

f lla,refresh

Any new jobs will use the updated LNKLST.

Previously running jobs will continue to use the old LNKLST, unless you tell the jobs they can use the new LNKLST, with the UPDATE option.

 setprog LNKLST,UPDATE,JOB=* 

It is still not that simple…

Todd Burch told me

The process documented by IBM leaves some room for interpretation, unfortunately. After a COPYFROM=CURRENT, I DELETE the old DSNAME and then ADD the new DSNAME. I prefer to use BEFORE or AFTER to put the new DSNAME back in the same place, otherwise, the ADD defaults to the end of the linklist concat.

Also, we use shared dasd, so we tend to use the same dataset names when we delete and re-add. Our process is to IEBCOPY w/replace the old library from our build system (via shared dasd), then on the testing system, stop LLA, define a new LL with COPYFROM=CURRENT, then delete the old DSNAME lib, then re-add the same name using BEFORE or AFTER to keep the concatenation as before**, and then activate and update and then restart LLA.

**it’s typical for us to have both a test library allocated just prior to a production library, akin to

MY.TEST.PRODUCT.LIB
MY.PROD.PRODUCT.LIB

and since we are testing a new test lib, we delete MY.TEST.PRODUCT.LIB and then add it back BEFORE MY.PROD.PRODUCT.LIB.

Whoops

When I tried to use these new libraries I got

ICH420I PROGRAM CEEBINIT FROM LIBRARY CEE.SCEERUN.NEW CAUSED THE
  ENVIRONMENT TO BECOME UNCONTROLLED.
BPXP014I ENVIRONMENT MUST BE CONTROLLED FOR DAEMON (BPX.DAEMON)
  PROCESSING.

when trying to use SSHD. Because the SSH Daemon is a special program, in that it changes userid to that in the request, it has extra security controls.

It took me a few minutes to understand how to do this.

Background

You need a statement like

RDEFINE PROGRAM CEEBINIT -
ADDMEM('CEE.SCEERUN.NEW'/USER08/NOPADCHK) UACC(READ)

This sets up the additional security for the program CEEBINIT in the specified library.

However this is not enough, as there are other programs that need to be controlled as well.
You do not use RDEFINE PROGRAM * ADDMEM('CEE.SCEERUN.NEW'…). (That would be too simple.)

There is (usually) a PROGRAM * profile already defined. You have to extend it using

RALTER PROGRAM *  -
ADDMEM('CEE.SCEERUN.NEW'/USER08/NOPADCHK) UACC(READ)
RALTER PROGRAM * -
ADDMEM('CEE.SCEERUN2.NEW'/B3USR1/NOPADCHK) UACC(READ)
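
For the change to be picked up, you typically also need to refresh program control:

SETROPTS WHEN(PROGRAM) REFRESH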

The command RLIST PROGRAM * gave me

CLASS      NAME
-----      ----
PROGRAM    *

MEMBER CLASS NAME
-----------------
PMBR

DATA SET NAME                                 VOLSER  PADS CHECKING
--------------------------------------------  ------  -------------
BBO401.SBBOLOAD                                       NO
CBC.SCLBDLL                                           NO
CEE.SCEERUN                                           NO
CEE.SCEERUN                                   ******  NO
CEE.SCEERUN2                                          NO
CEE.SCEERUN2                                  ******  NO
CEE.SCEERUN2.NEW                              B3USR1  NO
COLIN.LOAD2                                   ******  NO
...