The Python interface to RACF is great.

The Python package pysear for working with RACF is great. The source is on GitHub, and the documentation starts here. It is well documented, and there are good examples.

I’ve managed to do a lot of processing with very little of my own code.

One project I’ve been meaning to do for a while is to extract the contents of a RACF database, compare it with a different database, and show the differences. IBM provides a batch program and a very large Rexx exec; this has some bugs and is not very nice to use. There is a Rexx interface, which worked, but I found I was writing a lot of code. Then I found the pysear code.

Background

The data returned for userids (and other types of data) is organized into segments.
You can display the base segment for a user:

tso lu colin

To display the TSO segment:

tso lu colin tso

Field names returned by pysear have the segment name as a prefix, for example base:max_incorrect_password_attempts.

My first query

What are the active classes in RACF?

See the example.

from sear import sear
import json
import sys
result = sear(
    {
        "operation": "extract",
        "admin_type": "racf-options"
    },
)
json_data = json.dumps(result.result, indent=2)
print(json_data)

For error handling, see the Error handling section below.

This produces output like

{
  "profile": {
    "base": {
      "base:active_classes": [
        "DATASET",
        "USER",
        ...
      ],
      "base:add_creator_to_access_list": true,
      ...
      "base:max_incorrect_password_attempts": 3,
      ...
    }
  }
}

To process the active classes one at a time you need code like

for ac in result.result["profile"]["base"]["base:active_classes"]:
    print("Active class:",ac)

The returned attributes are called traits. See here for the traits for RACF options. For this trait, the documentation shows:

Trait                 base:max_incorrect_password_attempts
RACF Key              revoke
Data Types            String
Operators Allowed     "set", "delete"
Supported Operations  "alter", "extract"

Because this attribute is a single-valued object, you can set it or delete it.

For example, you can use this attribute like this:

result = sear(
    {
        "operation": "alter",
        "admin_type": "racf-options",
        "traits": {
            "base:max_incorrect_password_attempts": 5,
        },
    },
)

The trait "base:active_classes" is a list of classes, ["DATASET", "USER", ...].

The trait is

Trait                 base:active_classes
RACF Key              classact
Data Types            string
Operators Allowed     "add", "remove"
Supported Operations  "alter", "extract"

Because it is a list, you can add or remove an element; you do not use set or delete, which would replace the whole list.
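For example, to activate an extra class you add one element to the list. A minimal sketch, assuming (from the pysear trait documentation convention) that the operator is written as a prefix on the trait name, and using MQADMIN purely as an example class name:

from sear import sear

# A sketch: activate one more class by adding an element to
# base:active_classes. The "add:" prefix on the trait name is my
# assumption from the pysear documentation convention; MQADMIN is
# just an example class.
result = sear(
    {
        "operation": "alter",
        "admin_type": "racf-options",
        "traits": {
            "add:base:active_classes": "MQADMIN",
        },
    },
)
print(result.result)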

Some traits, such as use counts, have Operators Allowed of N/A. You can only extract and display the information.

My second query

What are the userids in RACF?

The traits are listed here, and code examples are here.

I used

from sear import sear
import json

# get all userids beginning with ZWE
users = sear(
    {
        "operation": "search",
        "admin_type": "user",
        "userid_filter": "ZWE",
    },
)
profiles = users.result["profiles"]
# Now process each profile in turn.
# Because these are userid profiles we need admin_type=user and userid=...
for profile in profiles:
    user = sear(
        {
            "operation": "extract",
            "admin_type": "user",
            "userid": profile,
        },
    )
    segments = user.result["profile"]
    for segment in segments:  # e.g. base or omvs
        for w1, v1 in segments[segment].items():
            json_data = json.dumps(v1, indent=2)
            print(w1, json_data)

This gave

===PROFILE=== ZWESIUSR
base:auditor false
base:automatic_dataset_protection false
base:create_date "05/06/20"
base:default_group "ZWEADMIN"
base:group_connections [
  {
    ...
    "base:group_connection_group": "IZUADMIN",
    ...
    "base:group_connection_owner": "IBMUSER",
    ...
},
{
    ...
    "base:group_connection_group": "IZUUSER",
   ...
}
...
omvs:default_shell "/bin/sh"
omvs:home_directory "/apps/zowe/v10/home/zwesiusr"
omvs:uid 990017
===PROFILE=== ZWESVUSR
...

Notes on using search and extract

If you use "operation": "search" you need a ..._filter parameter (such as userid_filter). If you use extract you specify the value directly, such as "userid": ...

Processing resources

You can process RACF resources. For example, the OPERCMDS class provides protection for MVS.DISPLAY commands.

The sear request needs a "class": ... value, for example:

result = sear(
    {
        "operation": "search",
        "admin_type": "resource",
        "class": "OPERCMDS",
        "resource_filter": "MVS.**",
    },
)
result = sear(
    {
        "operation": "extract",
        "admin_type": "resource",
        "resource": "MVS.DISPLAY",
        "class": "Opercmds",
    },
)

The value of the class is converted to upper case.

Changing a profile

If you change a profile, for example to issue the PERMIT command, use:

from sear import sear
import json

result = sear(
    {   "operation": "alter",
        "admin_type": "permission",
        "resource": "MVS.DISPLAY.*",
        "userid": "ADCDG",
        "traits": {
          "base:access": "CONTROL"
        },
        "class": "OPERCMDS"

    },
)
json_data = json.dumps(result.result, indent=2)
print(json_data)

The output was

{
  "commands": [
    {
      "command": "PERMIT MVS.DISPLAY.* CLASS(OPERCMDS)ACCESS (CONTROL) ID(ADCDG)",
      "messages": [
        "ICH06011I RACLISTED PROFILES FOR OPERCMDS WILL NOT REFLECT THE UPDATE(S) UNTIL A SETROPTS REFRESH IS ISSUED"
      ]
    }
  ],
  "return_codes": {
    "racf_reason_code": 0,
    "racf_return_code": 0,
    "saf_return_code": 0,
    "sear_return_code": 0
  }
}

Error handling

Return codes and error messages

There are two layers of error handling.

  • Invalid requests – problems detected by pysear
  • A non-zero return code from the underlying RACF code.

If pysear detects a problem it returns it in

result.result.get("errors") 

For example, you have specified an invalid parameter such as "userzzz": "MINE".

If you do not have this field, then the request was passed to the RACF service, which returns multiple values. See IRRSMO00 return and reason codes. There will be values for:

  • SAF return code
  • RACF return code
  • RACF reason code
  • sear return code.

If the RACF return code is zero then the request was successful.

To make error handling easier, and have one error-handling routine for all requests, I used:


try:
    result = try_sear(search)
except Exception as ex:
    print("Exception-Colin Line112:", ex)
    quit()

Where try_sear was

def try_sear(data):
    # Execute the request.
    result = sear(data)
    if result.result.get("errors") is not None:
        print("Request:", result.request)
        print("Error with request:", result.result["errors"])
        raise ValueError("errors")
    elif result.result["return_codes"]["racf_return_code"] != 0:
        rcs = result.result["return_codes"]
        print("SAF Return code", rcs["saf_return_code"],
              "RACF Return code", rcs["racf_return_code"],
              "RACF Reason code", rcs["racf_reason_code"],
              )
        raise ValueError("return codes")
    return result

Overall

This interface is very easy to use.
I use it to extract definitions from one RACF database and save them as JSON files. I repeat this with a different (historical) RACF database, then compare the two JSON files to see the differences.

Note: the sear command only works with the active database, so I had to make the historical database active, run the commands, and switch back to the current database.
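The comparison itself is then ordinary JSON processing. A minimal sketch, assuming the two extracts have been saved as current.json and historical.json, each a dict keyed by profile name (my layout, not pysear's):

import json

# A sketch of the comparison step. The file names and the top-level
# layout (one dict of traits per profile) are my assumptions.
with open("current.json") as f:
    current = json.load(f)
with open("historical.json") as f:
    historical = json.load(f)

for key in sorted(set(current) | set(historical)):
    old = historical.get(key)
    new = current.get(key)
    if old != new:
        print(key, ":", old, "->", new)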

Python: how do I convert a STCK to a readable timestamp?

As part of writing a GTF trace formatter in Python I needed to convert a STCK value to a printable value. I could do it in C, but I did not find a Python equivalent.

from datetime import datetime

# Pass in an 8-byte value.
def stck(value):
    value = int.from_bytes(value, "big")
    t = value / 4096                    # remove the bottom 12 bits to get a value in microseconds
    tsm = (t / 1000000) - 2208988800    # seconds from Jan 1 1970 (the STCK epoch is Jan 1 1900)
    ts = datetime.fromtimestamp(tsm)    # create the timestamp
    print("TS", tsm, ts.isoformat())    # format it

It produced:

TS 1735804391.575975 2025-01-02T07:53:11.575975
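To call it, pass any 8-byte value, for example sliced out of a trace record. The hex value below is made up, purely to show the call:

# A made-up 8-byte STCK value, e.g. sliced from a GTF trace record.
stck(bytes.fromhex("DCA8F0B2C5D40000"))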

Python could not read a data set I sent from z/OS USS.

I created a file in Unix System Services, and FTPed it down to my Linux box. I could edit it, and process it with no problems, until I came to read in the file using Python.

Python gave me

File "<frozen codecs>", line 322, in decode
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xb8 in position 3996: invalid start byte

The Linux command file pagentn.txt gave me

pagentn.txt: ISO-8859 text

whereas other files had ASCII text.

I changed my Python program to have

with open("/home/colinpaice/python/pagentn.txt", encoding="ISO-8859-1") as file:

and it worked!

I browsed the web, and found a Python way of finding the code page of a file

import chardet    
rawdata = open(infile, 'rb').read()
result = chardet.detect(rawdata)
charenc = result['encoding']

It returned a dict with:

result {'encoding': 'ISO-8859-1', 'confidence': 0.73, 'language': ''}
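Putting the two steps together, a small sketch that detects the encoding and then reopens the file as text (pagentn.txt is the example file from above):

import chardet

# Detect the encoding, then reopen the file as text using it.
infile = "pagentn.txt"
with open(infile, 'rb') as f:
    raw = f.read()
encoding = chardet.detect(raw)['encoding'] or 'utf-8'  # fall back if detection fails
with open(infile, encoding=encoding) as f:
    text = f.read()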

Creating a spreadsheet from Python to show time spent

I’ve been using xlsxwriter from Python to create graphs of data, and it is great.

I had problems with adding more data to a graph, and found some aspects of this were not documented. This blog post is to help other people who are trying to do something similar.

I wanted to produce a graph like this:

The pink colour would normally be transparent. I have coloured it to make the explanation easier.

This shows the OMS team worked from 0900 to 1600 on Tuesday, and SDC worked on Wednesday from 1000 to 1200, and again for a few minutes around 1900. I wanted the data to be colour coded, so OMS was brown and SDC was green.

Create the workbook, and the basic chart

import xlsxwriter

spreadSheet = "SPName"
workbook = xlsxwriter.Workbook(spreadSheet + ".xlsx")
workbook.set_calc_mode("auto")
summary = workbook.add_worksheet("data")
hhmm = workbook.add_format({'num_format': 'hh:mm'})
# Now the basic chart.
chart = workbook.add_chart({'type': 'bar', 'subtype': 'stacked'})
chart.set_title({'name': "end week:time spent"})
chart.set_y_axis({'reverse': True})
chart.set_legend({'none': True})
chart.set_x_axis({
    'time_axis': True,
    'num_format': 'hh:mm',
    'min': 0,
    'max': 1.0,
    'major_unit': 1/12,
    'minor_tick_mark': 'inside'
})

chart.set_size({'width': 1070, 'height': 300})
summary.insert_chart('A1', chart)

Data layout

It took a while to understand how the data should be laid out in the table

     A        B        C      D        E      F      G
1             OStart1  ODur1  SStart1  SDur1  SInt2  SDur2
2    OMS Tue  9.0      7.0
3    OMS Wed  17.0     2.0
4    SDC Wed                  10.0     2.0    7.0    0.1

Where the data is as follows:

  • OStart1 is the start time, in hours, for OMS
  • ODur1 is the duration for OMS; on Wed the time was from 1700 to 1900, an interval of 2.0 hours
  • SStart1 is the start time of the SDC Wed item
  • SDur1 is the duration of the work. The work was from 1000 to 1200
  • SInt2 is the interval from the end of the work to the start of the next work. It is not the start time. The next work starts at 1000 + the interval of 2 hours + the interval of 7 hours, or 1900
  • SDur2 is the duration of the next piece of work. It ends at 1000 + 2 hours + 7 hours + 0.1 hours
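The rows themselves are written with ordinary worksheet calls. A minimal sketch of writing the table above into the worksheet (note: the chart code below references a sheet called Hours, so the sheet name must match whatever you actually use):

# A sketch of writing the data rows above. None leaves a cell empty;
# rows and columns are zero-indexed.
rows = [
    ["",        "OStart1", "ODur1", "SStart1", "SDur1", "SInt2", "SDur2"],
    ["OMS Tue", 9.0,       7.0,     None,      None,    None,    None],
    ["OMS Wed", 17.0,      2.0,     None,      None,    None,    None],
    ["SDC Wed", None,      None,    10.0,      2.0,     7.0,     0.1],
]
for r, row in enumerate(rows):
    summary.write_row(r, 0, row)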

Define the data to the chart

To get the data displayed properly I used add_series to define the data.

Categories (the labels OMS Tue, OMS Wed, SDC Wed): you have to specify the same categories for all of your data; for me, the range A2:A5. Using one add_series for the OMS data and another add_series for the SDC data did not display the SDC data labels. This was the key to my problems.

You define the data as columns. The first column is the time from midnight. I have coloured it pink to show you; normally this would be fill = {'none': True}. You use:

fill = {'color': "pink"}  # normally fill = {'none': True}
chart.add_series({
    'name': "Series1",
    'categories': ["Hours", 1, 0, 4, 0],
    'values': ["Hours", 1, 1, 4, 1],
    'fill': fill
})

This specifies categories row 1, column 0 to row 4, column 0, and the column of data row 1, column 1, to row 4, column 1. (Column 0 is column A etc.)

For the second column – the brown one – you use:

fill = {'color': "brown"}
chart.add_series({
    'name': "Series2",
    'categories': ["Hours", 1, 0, 4, 0],
    'values': ["Hours", 1, 2, 4, 2],
    'fill': fill
})

The categories stay the same: the superset of names.

The “values” specifies the column of data row 1, column 2, to row 4, column 2.

Because the SDC rows have no data in this column, nothing is displayed for SDC.

For the SDC data I used four add_series requests. The first one:

  • name: Series3
  • categories: ["Hours",1,0,4,0], the same as for OMS
  • values: row 1, column 3 to row 4, column 3

I then repeated this for columns (and Series) 4, 5 and 6.

This gave me the output I wanted.

I used Python lists and loops to generate the add_series calls, so overall the code was fairly compact.
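The post's loop is not shown; here is a minimal sketch of the idea. The colour choices and the hard-coded row range are my assumptions:

# A sketch of generating the add_series calls in a loop.
# Columns 1 and 2 (B and C) are the OMS pair; columns 3 to 6 (D to G)
# are the SDC columns. "none" makes the spacer columns invisible.
colours = ["none", "brown", "none", "green", "none", "green"]
for col, colour in enumerate(colours, start=1):
    fill = {'none': True} if colour == "none" else {'color': colour}
    chart.add_series({
        'name': "Series" + str(col),
        'categories': ["Hours", 1, 0, 4, 0],
        'values': ["Hours", 1, col, 4, col],
        'fill': fill
    })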

Python and the MQ REST API

I found cURL a good way of using the MQ REST API, but I wanted to do more. cURL depends on a package called libcurl, which can be used by other languages.

Python seemed the next obvious place to look.

As I have found out, using digital certificates for authentication is hard to set up, and using signed certificates is even harder.  As I had done the hard work of setting up the certificates before I tried curl and Python, the curl and Python experience was pretty easy.

I looked at using the Python "requests" package. This allows you to specify most of the parameters that libcurl needs, except it does not allow you to specify the password for the user’s keystore.

I then looked at the Python pycurl package. This is a slightly lower-level API, but I got it working in an hour or so.
My whole program is below.

During the testing I got various errors, such as "77". These are documented here.

The messages were clear, for example

CURLE_SSL_CACERT_BADFILE (77) Problem with reading the SSL CA cert (path? access rights?).

Which was enough to tell me where to look.

All the things you can do with curl, you can do with pycurl.

 

# program - based on code in http://pycurl.io/docs/latest/quickstart.html
import sys
import pycurl

from io import BytesIO

# header_function taken from http://pycurl.io/docs/latest/quickstart.html
headers = {}
def header_function(header_line):
    # HTTP standard specifies that headers are encoded in iso-8859-1.
    header_line = header_line.decode('iso-8859-1')

    # Header lines include the first status line (HTTP/1.x ...).
    # We are going to ignore all lines that don't have a colon in them.
    # This will botch headers that are split onto multiple lines...
    if ':' not in header_line:
        return

    # Break the header line into header name and value.
    name, value = header_line.split(':', 1)
    print("header", name, value)

home = "/home/colinpaice/ssl/ssl2/"
ca=home+"cacert.pem"
cert=home+"testuser.pem"
key=home+"testuser.key.pem"
cookie=home+"cookie.jar.txt"
url="https://127.0.0.1:9443/ibmmq/rest/v1/admin/qmgr/QMA/queue/CP0000?attributes=type"
buffer = BytesIO()
c = pycurl.Curl()
print("C=",c)
try:
  # see option names here https://curl.haxx.se/libcurl/c/curl_easy_setopt.html
  # PycURL option names are derived from libcurl
  # option names by removing the CURLOPT_ prefix. 
  c.setopt(c.URL, url) 
  c.setopt(c.WRITEDATA, buffer) 
  c.setopt(pycurl.CAINFO, ca) 
  c.setopt(pycurl.CAPATH, "") 
  c.setopt(pycurl.SSLKEY, key) 
  c.setopt(pycurl.SSLCERT, cert) 
  c.setopt(pycurl.SSL_ENABLE_ALPN,1)
  c.setopt(pycurl.HTTP_VERSION,pycurl.CURL_HTTP_VERSION_2_0)
  c.setopt(pycurl.COOKIE,cookie) 
  c.setopt(pycurl.COOKIEJAR,cookie) 
  c.setopt(pycurl.SSLKEYPASSWD , "password") 
  c.setopt(c.HEADERFUNCTION, header_function)  
# c.setopt(c.VERBOSE, True)
  c.perform() 
  c.close()
except Exception as e: 
  print("exception :",e ) 
finally: 
  print("done") 
body = buffer.getvalue() # Body is a byte string. 
# We have to know the encoding in order to print it to a text file 
# such as standard output. 
print(body.decode('iso-8859-1'))
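The MQ REST API returns a JSON body, so an optional addition is to parse it rather than print the raw text:

import json

# Optional: parse the JSON body returned by the MQ REST API.
data = json.loads(body.decode('iso-8859-1'))
print(json.dumps(data, indent=2))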

 

Baby Python scripts doing powerful work with MQ

I found PyMQI, an interface from Python to MQ. This is really powerful, and I’m extending it to be even more amazing!

In this blog post, I give examples of what you can do.

  • Issue PCF commands and get responses back in words rather than internal codes (so CHANNEL_NAME instead of 3501)
  • Save the output of DISPLAY commands into files
  • Use these files to compare definitions and highlight differences
  • Check these files conform to corporate standards
  • Print out from the command event queue, the stats event queue, etc.


I can use some Python code to display information via PCF:

import pymqi

queue_manager = "QMA"  # example queue manager name
# connect to MQ
qmgr = pymqi.connect(queue_manager, "QMACLIENT", "127.0.0.1(1414)")
# I want to inquire on all SYSTEM.* channels
prefix = b"SYSTEM.*"
# This PCF request
args = {pymqi.CMQCFC.MQCACH_CHANNEL_NAME: prefix}
pcf = pymqi.PCFExecute(qmgr)
# go execute it
response = pcf.MQCMD_INQUIRE_CHANNEL(args)

This is pretty impressive, as a C program would take over 1000 lines to do the same!

This comes back with data like

  • 3501: b’SYSTEM.AUTO.RECEIVER’,
  • 1511: 3,
  • 2027: b’2018-08-16 ‘,
  • 2028: b’13.32.15′,
  • 1502: 50

which is cryptic even for experts, because you need to know that 3501 is the parameter identifier for CHANNEL_NAME.

I have some Python code which converts this to:

'CHANNEL_NAME': 'SYSTEM.AUTO.RECEIVER',
'CHANNEL_TYPE': 'RECEIVER',
'ALTERATION_DATE': '2018-08-16',
'ALTERATION_TIME': '13.32.15',
'BATCH_SIZE': 50

for which you only need a kindergarten level of MQ knowledge to understand it. It converts 3501 to CHANNEL_NAME, and 3 into RECEIVER.
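The conversion code itself is not shown in the post, but one common way to do the name lookup is to invert the constant names in pymqi's CMQCFC module. A sketch (the prefix list and name trimming are my simplifications, and mapping values such as 3 to RECEIVER needs a similar per-attribute table):

from pymqi import CMQCFC

# Build a reverse lookup: numeric parameter id -> readable name.
# MQCACH_/MQIACH_ are the channel attribute constant prefixes; the
# prefix stripping below is my own simplification.
lookup = {}
for name in dir(CMQCFC):
    value = getattr(CMQCFC, name)
    if name.startswith(("MQCACH_", "MQIACH_")) and isinstance(value, int):
        lookup[value] = name.split("_", 1)[1]  # e.g. 3501 -> "CHANNEL_NAME"

print(lookup.get(3501))  # CHANNEL_NAME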

With a few lines of Python I can write this data out so each queue is a file on disk in YAML format.

A YAML file for a queue looks like:

Q_NAME: TEMP
Q_TYPE: LOCAL
ACCOUNTING_Q: Q_MGR
ALTERATION_DATE: '2019-02-03'
ALTERATION_TIME: 18.15.52
BACKOUT_REQ_Q_NAME: ''
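As a sketch of the writing side, with ruamel.yaml (the queues/ directory and the cut-down dictionary are my assumptions):

from ruamel.yaml import YAML

# A sketch: write each queue's converted attributes to its own YAML file.
# "queue" stands in for the dict of readable names shown above.
yaml = YAML()
queue = {"Q_NAME": "TEMP", "Q_TYPE": "LOCAL", "ACCOUNTING_Q": "Q_MGR"}
with open("queues/" + queue["Q_NAME"] + ".yml", "w") as out:
    yaml.dump(queue, out)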

Now it gets exciting! (really)

Now it is in YAML, I can write small Python scripts to do clever things. For example

Compare queue definitions

from ruamel.yaml import YAML
import sys

yaml = YAML()
q1 = sys.argv[1]                   # get the first queue name
ignore = ["ALTERATION_DATE", "ALTERATION_TIME",
          "CREATION_DATE", "CREATION_TIME"]
in1 = open(q1, 'r')                # open the first queue
data1 = yaml.load(in1)             # and read the contents in
for i in range(2, len(sys.argv)):  # for all of the passed-in filenames
    q2 = sys.argv[i]               # get the name of the file
    in2 = open(q2, 'r')            # open the file
    data2 = yaml.load(in2)         # read it in
    for e in data1:                # for each parameter in file 1
        x1 = data1[e]              # get the value from file 1
        x2 = data2[e]              # get the value from the other file
        if e not in ignore:        # some parameters we want to ignore
            if x1 != x2:           # if the parameters are different
                print(q1, q2, ":", e, x1, "/", x2)  # print the queue names, keyword and values

From this it prints out the differences

queues/CP0000.yml queues/CP0001.yml : Q_NAME CP0000 / CP0001
queues/CP0000.yml queues/CP0001.yml : OPEN_INPUT_COUNT 1 / 0
queues/CP0000.yml queues/CP0001.yml : MONITORING_Q Q_MGR / HIGH
queues/CP0000.yml queues/CP0001.yml : OPEN_OUTPUT_COUNT 1 / 0
queues/CP0000.yml queues/CP0002.yml : Q_NAME CP0000 / CP0002
queues/CP0000.yml queues/CP0002.yml : OPEN_INPUT_COUNT 1 / 0
queues/CP0000.yml queues/CP0002.yml : OPEN_OUTPUT_COUNT 1 / 0

I thought this was pretty impressive for 20 lines of code.

And another script, for checking standards:

from ruamel.yaml import YAML
import sys

yaml = YAML()
q1 = sys.argv[1]       # get the queue name
# Define the variables to check.
lessthan = {"MAX_Q_DEPTH": 100}
ne = {"INHIBIT_PUT": "PUT_ALLOWED", "INHIBIT_GET": "GET_ALLOWED"}
in1 = open(q1, 'r')    # open the queue file
data = yaml.load(in1)  # and read the contents in
# For each element in the lessthan dictionary (MAX_Q_DEPTH), check it
# against the data read from the file. If the data in the file is not
# less than the value (100), print the name of the queue and the values.
for i in lessthan:     # just MAX_Q_DEPTH in this case
    if data[i] >= lessthan[i]:
        print(q1, i, data[i], "Field in error. It should be less than", lessthan[i])
# If the values are not equal, report them.
for i in ne:           # INHIBIT_PUT and INHIBIT_GET
    if data[i] != ne[i]:
        print(q1, i, data[i], "field is not equal to", ne[i])

The output is:

queues/CP0000.yml
MAX_Q_DEPTH 5000 Field in error. It should be < 100

Display command events

difference Q_NAME CP0000 CP0000 ALTERATION_DATE 2019-02-07 2019-02-11
difference Q_NAME CP0000 CP0000 ALTERATION_TIME 20.48.24 21.29.23
difference Q_NAME CP0000 CP0000 MAX_Q_DEPTH 4000 2000

With my journey so far – Python seems to be a clear winner in providing the infrastructure for managing queue managers.