From data to reports missing the potholes

I’ve been doing work with datasets on z/OS to produce reports. The data ranges from SMF records to DCOLLECT data about datasets and SMS.

It took a while to “get it right”, because I made some poor decisions as to how to process the data, and some of my processing was much more complex than it needed to be. It was easiest to start again!

I’ve been working with Python and Python tools, and other tools available on the platform. See Pandas 102, my notes on using pandas.

My current approach is to use Python code to read a record, parse the record into a dictionary (dict), then add the dict to a list of records. I then either pass the list of dicts to Pandas to display, or externalise the data and have a second Python program read the externalised data and do the Pandas processing.

Reading the data

The data is usually in data sets rather than files in Unix System Services. You can copy a dataset to a file, but it is easier to use the Python package pyzfile to read datasets directly. For example:

import sys
from pyzfile import *

def read_records():
    try:
        with ZFile("//'COLIN.DCOLLECT.OUT'", "rb,type=record,noseek") as file:
            for rec in file:
                # rec is one record, returned as bytes
                yield rec
    except ZFileError as e:
        print(e, file=sys.stderr)

Often a data source will contain a mixture of record types; for example a dump of SMF data sets may contain many different record types and subtypes.

You need to consider whether you want to process all record types in one pass, or process one record type in one run and a different record type in another run.
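For example, a minimal sketch of selecting just one record type per run, using the read_records generator from the example above. The position of the record type field is an assumption for illustration; check the mapping of the records you are reading.

def wanted(rec):
    # assumed: a 2-byte record type field at offset 4 - check your record's mapping
    return rec[4:6] == b"D "          # for example, only DCOLLECT 'D ' dataset records

for rec in read_records():            # the generator from the example above
    if not wanted(rec):
        continue
    # ... parse the record here ...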

Processing the data

You will normally have a mapping of the layout of the data in a record. Often there is a mix of record types; you need to decide which record types you process and which you ignore.

Field names

Some of the field names in a record are cryptic; they were created when field names could only be 8 characters or less. For example DCDDSNAM. This stands for DCollect record, record type D, field name DS NAMe. You need to decide what to name the field. Do you call it DCDDSNAM, and tell the reader to go and look in the documentation to understand the field names in the report, or do you try to add value and just call it DSN, or DataSetName? You cannot guess some fields, such as DCDVSAMI. This is VSAM Inconsistency.

You also need to consider the printed report. If you have a one character field in the record, and a field name which is 20 characters long, then by default the printed column will be 20 characters wide, and so waste space in the report. If the field is rarely used you could call it BF1, for Boring Field 1.

Character strings

Python does not work in EBCDIC; strings need to be decoded before they are printable. You will need to convert character data from EBCDIC to ASCII.

You can use a slice to extract data from a record. For example, to extract a string and convert it:

DSN = record[20:63].decode('cp500').strip()

Integers

Integers: you will need to convert these to internal format. I found the Python struct module very good to use. You give it a format string of conversion characters (integer, integer, …) and it returns a tuple of the data. If you are processing the data on a different platform, you may need to worry about big-endian and little-endian conversion of numbers.
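A minimal sketch: the offset and field contents are made up for illustration, but the ">II" format string shows how to unpack two big-endian 4-byte integers.

import struct

record = bytes(200)                          # stand-in for a real record
# ">" means big-endian (as used on z/OS); "I" is a 4-byte unsigned integer
value1, value2 = struct.unpack(">II", record[100:108])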

Strange integers

Some fields have units like hundredths of a second. You may want to convert these to a float:

float_value = float(input_value)/100

Packed numbers

Packed (decimal) numbers store two decimal digits per byte, with a sign digit in the last half-byte; dates are often held this way. For example a yyyyddd date for year 2025, day 5 is 0x2025005F, where the F is the sign digit. You cannot just print the raw value (it comes out as 539295839).
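A minimal sketch of unpacking a packed decimal field into an integer. It assumes the usual sign convention (D or B in the last half-byte means negative):

def unpack_packed(data):
    # two decimal digits per byte, sign in the last half-byte
    digits = data.hex()                       # b'\x20\x25\x00\x5f' -> "2025005f"
    sign = -1 if digits[-1] in "bd" else 1    # d (or b) means negative
    return sign * int(digits[:-1])

print(unpack_packed(b"\x20\x25\x00\x5f"))     # 2025005, i.e. yyyyddd 2025 day 005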

Bit masks

Bit masks are very common, for example there is a 1 byte field DCVFLAG1 with values:

  • DCVUSPVT 0x20 Private
  • DCVUSPUB 0x10 Public
  • DCVUSSTG 0x08 Storage
  • DCVSHRDS 0x04 Device is sharable

If the value of the field is 0x14, what do you return? I would create a field Flag1 whose value is a list ["Public", "Shareable"]. If all the bits were off, this would be an empty list []. It would be easy to return ["DCVUSPUB", "DCVSHRDS"], or just display the hex value 14 (or 0x14), but this makes it hard for the people reading the reports to interpret the data.
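A minimal sketch of turning the flag byte into a list of readable names, using the DCVFLAG1 bit values listed above:

FLAG1_BITS = {0x20: "Private",
              0x10: "Public",
              0x08: "Storage",
              0x04: "Shareable"}

def decode_flags(value):
    # return the name of every bit which is on
    return [name for bit, name in FLAG1_BITS.items() if value & bit]

print(decode_flags(0x14))    # ['Public', 'Shareable']
print(decode_flags(0x00))    # []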

Triplets

SMF records contain triplets. A triplet is [offset to the start of a section, length of each section, count of sections] within the record.

For example in the SMF 30 record there are many triplet sections. There is one for “usage data”, used in usage based pricing. There can be zero or more usage data sections, each with fields like:

  • Product owner
  • Product name
  • TCB time used in hundredths of a second

How are you going to process this?
The SMF 30 record has three fields for the usage data triplet:

  • SMF30UDO Offset to Usage Data section in SMF 30 record
  • SMF30UDL Length of each Usage Data section in SMF 30 record
  • SMF30UDN Number of Usage Data sections in SMF 30 record

I would create a variable UsageData = [{"ProdOwner": …, "ProdName": …, "TCBTime": …}, {"ProdOwner": …, "ProdName": …, "TCBTime": …}, …]

and convert TCBTime from an integer representing hundredths of a second to a floating point number.
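A minimal sketch of walking one triplet and building that list. The offsets of the fields inside each usage data section are made up for illustration, not taken from the real SMF 30 mapping, so check the SMF documentation:

import struct

def usage_sections(record, offset, length, count):
    # offset, length, count come from SMF30UDO, SMF30UDL, SMF30UDN
    sections = []
    for i in range(count):
        s = record[offset + i * length: offset + (i + 1) * length]
        sections.append({
            "ProdOwner": s[0:16].decode("cp500").strip(),            # illustrative offsets
            "ProdName":  s[16:32].decode("cp500").strip(),
            "TCBTime":   struct.unpack(">I", s[32:36])[0] / 100,     # hundredths -> seconds
        })
    return sections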

Having these triplets makes a challenge when printing the record. You could decide to

  • omit this data
  • summarise the data – and provide only a sum of the TCBTime value
  • give the data as a list of dicts, then have a Pandas step to copy only the fields you need for your reports.

For this usage data, I may want a report showing which jobs used which product, and how much CPU the job used in that product. Although I may capture the data as a list of products, I could extract the data, and create another data record with

  • jobname1, product1, … CPU used1
  • jobname1, product2, … CPU used2
  • jobname2, product1, … CPU used1
  • jobname2, product3, … CPU used3

and remove the product data from the original data record.
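A minimal sketch of that restructuring, assuming the record dict has Jobname and UsageData fields as built above:

data = {"Jobname": "JOB1",
        "UsageData": [{"ProdOwner": "IBM", "ProdName": "PROD1", "TCBTime": 1.23},
                      {"ProdOwner": "IBM", "ProdName": "PROD2", "TCBTime": 0.45}]}

usage_rows = []
for u in data.pop("UsageData"):              # remove the product data from the record
    usage_rows.append({"Jobname": data["Jobname"],
                       "Product": u["ProdName"],
                       "CPUUsed": u["TCBTime"]})
# usage_rows now has one row per (jobname, product) pair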

Do you want all of the fields?

You may want to ignore fields, such as reserved values, length of record values, record type, and any fields you are not interested in. Record length tends to be the first field, and this is usually not interesting when generating default reports.

How to handle a different length record?

The format of many records changes with new releases, typically adding new fields.

You need to be able to handle records from a previous release, where the record is shorter. For example, do not add the newer fields to your dict, or add them with a value of None.
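A minimal sketch of guarding against a short record. The field name, offset and length are made up for illustration:

import struct

def new_field(record):
    # the field added in the newer release is assumed to be a 4-byte integer at offset 128
    if len(record) >= 132:
        return struct.unpack(">I", record[128:132])[0]
    return None                    # older, shorter record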

Now I’ve got a record – now what?

Once you have got your record, and created a dict from the contents {fieldname1: value1, fieldname2: value2, …}, you could just add it to the list to be passed to Pandas. It is not always that simple.

I found that some records need post processing before saving.

Calculations

For a DCOLLECT record, there is a field which says

DCVFRESP: Free Space on Volume (in KB when DCVCYLMG is set to 0, or in MB when DCVCYLMG is set to 1)

You need to check bit DCVCYLMG and have logic like

if DCVCYLMG == 1:
    data["FreeSpVolKB"] = data["FreeSpVolKB"] * 1024    # convert MB to KB

Adding or deleting fields

For some fields I did some calculations to simplify the processing. For example I wanted an average time when I had total_time and count.

I created average_time = total_time / count, added this field, and deleted the total_time and count fields.
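A minimal sketch, with an illustrative record, which also guards against a zero count:

data = {"jobname": "JOB1", "total_time": 250, "count": 5}       # illustrative record

count = data.pop("count")                                       # remove the raw fields...
total_time = data.pop("total_time")
data["average_time"] = total_time / count if count else None    # ...and add the derived one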

Error handling

I found some records have an error flag, for example “Error calculating volume capacity”. You need to decide what to do.

  • Do you include them, and risk that the calculations/display of volume capacity might be wrong?
  • Do you report the record during the collection stage, and not include it in the overall data? (A sketch of this approach follows.)
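A minimal sketch of the second option. The CapacityError field name is made up for illustration:

import sys

data = {"Volser": "USER02", "CapacityError": True, "FreeSpVolKB": 0}   # illustrative record
grand_data = []

if data.get("CapacityError"):
    print("skipping record for volume", data["Volser"], file=sys.stderr)   # report it...
else:
    grand_data.append(data)        # ...and keep only the good records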

How do you accumulate the data: dicts or lists?

When using Pandas you can build each record as a dict of values {"kw1": "v1", "kw2": "v2"}, then build a list of dicts [{}, {}, …],

or have each “column” hold a list of values {"Jobname": ["job1", "job2", …], "CPUUsed": [99, 101, …], …}. As you process each field you append it to the appropriate “column” list.

import pandas as pd

# a dict of lists - one list per column
datad = {"dsn": ["ABC", "DEF"],
         "volser": ["SYSRES", "USER02"]}
datad["dsn"].append("GHI")
datad["volser"].append("OLDRES")

pdd = pd.DataFrame(datad)


# a list of dicts - one dict per record
dictl = [
    {"dsn": "ABC", "volser": "SYSRES"},
    {"dsn": "DEF", "volser": "USER02"}]
newdict = {"dsn": "GHI", "volser": "OLDRES"}

dictl.append(newdict)

pdl = pd.DataFrame.from_records(dictl)

I think it is better to capture your data in a dict, then add the dict to the list of records.

For example with

DCVFRESP: Free Space on Volume (in KB when DCVCYLMG is set to 0, or in MB when DCVCYLMG is set to 1)

If you use a dict to collect the data, you can then easily massage the values, before adding the dict to the list.

if DCVCYLMG == 1:
    data["FreeSpVolKB"] = data["FreeSpVolKB"] * 1024    # convert MB to KB

grand_data.append(data)

If you try to do this using “column” values it gets really messy trying to do a similar calculation.

Using the data

It took a long time to process the dataset and create the Python data. I found it quicker overall to process the dataset once, and externalise the data using Pickle or JSON, then have different Python programs which read the data in and process it. For example (a sketch follows the list):

  • Creating a new data structure using just the columns I was interested in.
  • Filtering which rows I wanted.
  • Saving the result.
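A minimal sketch of the externalise-then-report flow using Pickle and Pandas; the file names, columns and filter are just examples:

import pickle
import pandas as pd

# grand_data is the list of dicts built during collection; a small example here
grand_data = [{"dsn": "ABC", "volser": "SYSRES"},
              {"dsn": "DEF", "volser": "USER02"}]

# externalise the collected data once...
with open("dcollect_records.pkl", "wb") as f:
    pickle.dump(grand_data, f)

# ...then a separate reporting program can read it back and process it
with open("dcollect_records.pkl", "rb") as f:
    records = pickle.load(f)

df = pd.DataFrame.from_records(records)
df = df[["dsn", "volser"]]                   # keep only the columns of interest
df = df[df["volser"] == "SYSRES"]            # filter the rows you want
df.to_pickle("sysres_records.pkl")           # save the reduced data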
