Changing TCPIP configuration on z/OS is not easy.

I wanted to change the TCPIP definitions, so I could FTP from my laptop to the zD&T system running the ADCD z/OS images.

On my one-user, self contained system, it was easiest to make the changes to TCPIP, then stop and restart TCPIP.

Of course, with multiple users this would be disruptive and may not be possible. You can activate changes to your configuration without restarting TCPIP – but it is not trivial.

Where are the definitions?

In the TCPIP job there is a //PROFILE… statement. This was ADCD.Z25A.TCPPARMS(PROF2). In this member is the TCPIP configuration, including several “include adcd.Z25A.tcpparms(…)” statements; one of these, “include adcd.Z25A.tcpparms(zpdtdev1)”, had the definitions for the connection I needed to change.

Note: The TCPIP configuration explicitly includes the dataset(member) name. If you copy a member from ADCD.Z25A.TCPPARMS to USER.Z25A.TCPPARMS, it will not be used unless you change the configuration to use the fully qualified name.
You may want to copy a member from ADCD.Z25A.TCPPARMS, to keep a copy of the original, then edit the ADCD.Z25A.TCPPARMS to make your changes.

You can use the OBEYFILE command to make configuration changes once TCPIP has started; this can use any dataset. You can put your new definitions in TEST.TCPPARMS, and use OBEYFILE to activate them. Once they are working as expected, copy them to ADCD.Z25A.TCPPARMS so they are activated when TCPIP starts.

A typical OBEYFILE command is

v tcpip,tcpip,obeyfile,COLIN.TCPPARMS(DELHOME)

Refreshing TCP/IP definitions.

It is not easy to refresh the TCP/IP definitions. Restarting TCP may be an easier solution.

The definitions I wanted to change were in member zpdtdev1

DEVICE PORTA MPCIPA 
 LINK ETH1 IPAQENET PORTA 
HOME &HOMEIPADDRESS1 ETH1 
BEGINRoutes 
; Destination        SubnetMask    FirstHop       LinkName    Size 
ROUTE 9.114.209.0    255.255.255.0    =            ETH1   MTU 1492 
; Destination                      First Hop      LinkName    Size 
ROUTE DEFAULT             &DEFAULTROUTEADDR        ETH1   MTU 1492 
ENDRoutes 
START PORTA 

I needed to change the system symbols &HOMEIPADDRESS1 and &DEFAULTROUTEADDR to be hard coded values.

There is no “replace” command; you have to delete the definitions and re-add them.

From the above configuration file, the obvious statements are

stop PORTA
delete LINK ETH1
delete device PORTA

but this fails with

EZZ0395I DELETE LINK ETH1 ON LINE 2 FAILED BECAUSE LINK STATE NOT VALID
EZZ0395I DELETE DEVICE PORTA ON LINE 3 FAILED BECAUSE DEVICE HAS A LINK DEFINED

The command TSO NETSTAT DEVLINKS gives the status. This gave me

EZZ2760I DevName: PORTA DevType: MPCIPA
EZZ2766I DevStatus: Not Active CfgRouter: Non ActRouter: Unknown
EZZ2761I LnkName: ETH1 LnkType: IPAQENET LnkStatus: Not Active

The EZZ0395I message said

The link is in use. If this message was issued in response to an attempt to delete a link, the link IP address might still be defined. You must delete the link IP address from the HOME list before the link can be deleted. To remove the link IP address from the HOME list, use the VARY TCPIP,,OBEYFILE command with a profile that contains a HOME statement that does not include the home IP address that is associated with the link that you want to delete. If you specify the updated HOME statement and the DELETE LINK statement in the same VARY TCPIP,,OBEYFILE data set, the HOME statement must precede the DELETE LINK statement.

A replace option would seem a better design than the above.

TSO NETSTAT HOME gave me

EZZ2350I MVS TCP/IP NETSTAT CS V2R5       TCPIP Name: TCPIP           16:28:46
EZZ2700I Home address list:
EZZ2701I Address          Link             Flg
EZZ2702I -------          ----             ---
EZZ2703I 172.26.1.2       ETH1             P
EZZ2703I 10.1.1.31        EZASAMEMVS
EZZ2703I 127.0.0.1        LOOPBACK
 
EZZ2704I Address          Interface        Flg
EZZ2704I -------          ---------        ---
EZZ2703I 10.1.1.31        EZAZCX

I copied the key information into a file – excluding the 172.* entry for ETH1

HOME  10.1.1.31        EZASAMEMVS 
      127.0.0.1        LOOPBACK 

and used

v tcpip,tcpip,obeyfile,COLIN.TCPPARMS(DELHOME)

this gave messages

EZZ0344I PERMANENT LOOPBACK ADDRESS 127.0.0.1 SPECIFIED ON LINE 2 CANNOT BE ADDED TO THE HOME LIST     
EZZ0612I HOME ADDRESS 10.1.1.31 FOR LINK EZASAMEMVS ON LINE 1 REPLACES THE PREVIOUS ADDRESS            
EZZ0316I PROFILE PROCESSING COMPLETE FOR FILE 'USER.Z25A.TCPPARMS(DELHOME)'                            
EZZ0303I OBEYFILE FILE CONTAINS ERRORS                                                                 
EZZ0331I NO HOME ADDRESS ASSIGNED TO LINK ETH1                                                         
EZZ0619I LINK EZASAMEMVS USES DUPLICATE HOME ADDRESS 10.1.1.31                                         
EZZ0619I LINK IQDIOLNK0A01011F USES DUPLICATE HOME ADDRESS 10.1.1.31                                   
EZZ0059I VARY OBEY COMMAND FAILED: SEE PREVIOUS MESSAGES                                               

Despite the error messages, it seems to have worked, as TSO NETSTAT HOME no longer showed ETH1.

The output from TSO NETSTAT DEVLINKS showed the deletes had worked: the device PORTA and link ETH1 were no longer present.

I changed the TCPIP definitions and used

V tcpip,tcpip,obeyfile,ADCD.Z25A.TCPPARMS(ZPDTDEV1)

this worked, and TSO NETSTAT HOME gave

MVS TCP/IP NETSTAT CS V2R5       TCPIP Name: TCPIP           16:44:09
Home address list:
Address          Link             Flg
-------          ----             ---
10.1.1.2         ETH1             P
127.0.0.1        LOOPBACK

Address          Interface        Flg
-------          ---------        ---
10.1.1.31        EZAZCX

and ping 10.1.1.2 worked.

As I said at the top – it was quicker to restart TCPIP.

Changing z/OS system symbols is – easy – ish.

z/OS provides system-wide symbols. This is very useful because you can have configuration with &SYSNAME. within it – so you can have one definition, and the value depends on which system you are on.

The process of changing these symbols without an IPL may be trivial – or not.

What is my system currently using?

You can use the operator command D SYMBOLS . This gives output like

IEA007I STATIC SYSTEM SYMBOL VALUES 607          
 &SYSALVL.          = "2"                        
 &SYSCLONE.         = "1A"                       
 &SYSNAME.          = "S0W1"                     
 &SYSOSLVL.         = "Z1020500"                 

Where do these come from?

The system was IPLed with load parameter A80GK; this points to member LOADGK in SYS1.IPLPARM, or in the PARMLIB concatenation. This member had

IODF     99 SYS1                                                   
INITSQA  0000M 0008M                                               
SYSCAT   A5SYS1113CCATALOG.Z25A.MASTER                             
SYSPARM  NZ                                                        
IEASYM   (AU,GK)                                                   
NUCLST   00                                                        
PARMLIB  USER.Z25A.PARMLIB                            A5CFG1       
PARMLIB  FEU.Z25A.PARMLIB                             A5CFG1       
PARMLIB  ADCD.Z25A.PARMLIB                            A5SYS1       
PARMLIB  SYS1.PARMLIB                                 A5RES1       
NUCLEUS  1                                                         
SYSPLEX  ADCDPL                                                    

At the start of the IPL it displays

SYS1.IPLPARM ON DEVICE 0A82 SELECTED FOR IPL PARAMETERS    
LOAD   ID GK SELECTED                                      
NUCLST ID 00 SELECTED                                      
IODF DSN = SYS1.IODF99                                     
CONFIGURATION ID = OS390   . IODF DEVICE NUMBER = 0A82     
NUCLEUS 1 SELECTED                                         
IPL DEVICE: 00A80  VOLUME: A5RES1                          
MASTER CATALOG SELECTED IS CATALOG.Z25A.MASTER             
MEMBER IEASYMAU FOUND IN FEU.Z25A.PARMLIB                  
MEMBER IEASYMGK FOUND IN FEU.Z25A.PARMLIB                  

Which matches the LOADGK member.

This shows the symbols came from definitions IEASYMAU and IEASYMGK.

You can use DISPLAY IPLINFO to get

IEE254I  13.34.36 IPLINFO DISPLAY 594                      
 SYSTEM IPLED AT 13.10.56 ON 09/12/2022                    
 RELEASE z/OS 02.05.00    LICENSE = z/OS                   
 USED LOADWS IN SYS1.IPLPARM ON 00A82                      
 ARCHLVL = 2   MTLSHARE = N                                
 IEASYM LIST = 00                                          
 IEASYS LIST = WS (OP)                                     
 IODF DEVICE: ORIGINAL(00A82) CURRENT(00A82)               
 IPL DEVICE: ORIGINAL(00A80) CURRENT(00A80) VOLUME(A5RES1) 

So you can see that for this IPL, LOADWS and member IEASYM00 were used.

Changing symbols dynamically.

You can update members in the PARMLIB concatenation, but activating them gets a bit harder.

The DISPLAY PARMLIB command displays the PARMLIB concatenation for example

PARMLIB DATA SETS SPECIFIED                                             
AT IPL                                                                  
ENTRY  FLAGS  VOLUME  DATA SET                                          
  1      S    A5CFG1  USER.Z25A.PARMLIB                                 
  2      S    A5CFG1  FEU.Z25A.PARMLIB                                  
  3      S    A5SYS1  ADCD.Z25A.PARMLIB                                 
  4      S    A5RES1  SYS1.PARMLIB                                      

I copied a member from ADCD.Z25A.PARMLIB, to USER.Z25A.PARMLIB with the same name, and edited that. The next time the member is used, the copy from USER.Z25A.PARMLIB will be used. This means you keep the original unchanged, and only change the copy. You may want to have a LOADxx without the USER…PARMLIB, in case you make a mistake and the IPL fails!

The SETLOAD command can be used to refresh system symbols via a LOADxx member.

  • If you have the LOADxx members in the PARMLIB concatenation (and so are not using SYS1.IPLPARM), the SETLOAD WS,IEASYM command will refresh the symbols defined by LOADWS.
  • If you have the LOADxx members in SYS1.IPLPARM, then you need to use the command like
    • SETLOAD xx,IEASYM,DSN=SYS1.IPLPARM or
    • SETLOAD xx,IEASYM,DSN=SYS1.IPLPARM,VOL=A5SYS1
    • giving the name, and optionally the volume of the dataset containing the LOADxx member.

You can use a different LOADxx to that used at IPL – so you can change your symbols after the IPL has finished.

Note: The entire set of symbols is deleted, and the specified symbols added, so you need to reload all of them. For example, if you specified IEASYM (xx,yy) at IPL, and use SETLOAD with a LOADxx that specifies only IEASYMyy, you will lose the symbols from IEASYMxx.

Running assembler control block chains in C

I needed to extract some information from z/OS in my C program. There is not a callable interface for the data, so I had to chain through z/OS control blocks.

Once you have an example to copy it is pretty easy – it is just getting started which is the problem.

I have code (which starts with PSATOLD)

 #define PSA  540                         /* PSATOLD - address of the current TCB */
 char *TCB   = (char*)*(int*)(PSA);       /* -> the current TCB                   */
 char *TIO   = (char*)*(int*)(TCB + 12);  /* TCBTIO at offset 12 -> the TIOT      */
 char *TIOE  = (char*)(TIO + 24);         /* first TIOT entry at offset 24        */

  • At absolute address 540 (0x21C) is the address of the currently executing TCB.
  • (int *) (PSA) says treat this as an integer (4 byte) pointer.
  • * take the value of what this integer pointer points to. This is the address of the TCB
  • TCB + 12. Offset 12 (0x0C) in the TCB is the address of the Task I/O Table (TCBTIO)
  • (int *) says treat this as an integer ( 4 byte) pointer
  • * take the value of it to get to the TIOT
  • Offset 24 (0x18) is the location of the first TIOT entry in the control block

When I originally copied the code it had (char *)*(long *)(PSA). This worked fine in a 31-bit program but not in a 64-bit program, because a long is 64 bits there and it picked up 8 bytes as the address – not 4! I had to use “int” to get it to work.
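
For example, continuing the chain above, here is a minimal sketch which prints the job name and the first DD name from the TIOT. The offsets used (job name in the first 8 bytes of the TIOT, DD name at offset 4 within each TIOT entry) are my assumptions from the IEFTIOT1 mapping, so check them before relying on this.

#include <stdio.h>
#include <string.h>

#define PSA 540                            /* PSATOLD - address of the current TCB */

int main(void)
{
  char *TCB  = (char*)*(int*)(PSA);
  char *TIO  = (char*)*(int*)(TCB + 12);   /* TCBTIO -> the TIOT                   */
  char *TIOE = (char*)(TIO + 24);          /* first TIOT entry                     */

  char jobName[9];
  char ddName[9];
  memcpy(jobName, TIO, 8);                 /* job name - first 8 bytes of the TIOT */
  jobName[8] = 0;
  memcpy(ddName, TIOE + 4, 8);             /* DD name - assumed offset 4 in entry  */
  ddName[8] = 0;
  printf("Jobname %s first DD %s\n", jobName, ddName);
  return 0;
}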

Another example, which prints the CPU TCB and SRB time used by each address space, is

// CVT - Main anchor for many system wide control blocks (pointer at PSA offset 16)
#define FLTCVT     16L
//  The first Address Space Control Block
#define CVTASCBH  564L
// the chain of ASCBs - next
#define ASCBFWDP    4L
//  offset to the TCB (job step) time
#define ASCBEJST   64L
//  offset to the SRB time - check this offset against the IHAASCB mapping
#define ASCBSRBT  200L
// the ASID of this address space
#define ASCBASID   36L


__int64 lTCB, lSRB; // could have used long long
short ASID;
int i;
char *plStor = (char*)FLTCVT;
char *plCVT  = (char*)*(int*)plStor;
char *plASCB = (char*)*(int*)(plCVT+CVTASCBH); // first ASCB
for( i=0; i<1000 && plASCB != NULL;
       i++, plASCB = (char*)*(int*)(plASCB+ASCBFWDP) )
{
  lTCB = *(__int64*)(plASCB+ASCBEJST) >> 12; // TOD format -> microseconds
  lSRB = *(__int64*)(plASCB+ASCBSRBT) >> 12; // TOD format -> microseconds
  ASID = *(short*)(plASCB+ASCBASID);
  printf("ASID=%4.4x TCB=%lld; SRB=%lld\n", ASID, lTCB, lSRB);
}

Creating a C external function for Python, an easier way to compile

I wrote about my first steps in creating a C extension in Python. Now I’ve got more experience, I’ve found an easier way of compiling the program and creating a load module. It is not the official way – but it works, and is easier to do!

The traditional way of building a package is to use the setup.py technique. I’ve found just compiling it works just as well (and is slightly faster). You still need the setup.py for building the Python source.

I set up a cp4.sh file

name=zos
pythonSide='/usr/lpp/IBM/cyp/v3r8/pyz/lib/python3.8/config-3.8/libpython3.8.x'
export _C89_CCMODE=1
p1=" -DNDEBUG -O3 -qarch=10 -qlanglvl=extc99 -q64"
p2="-Wc,DLL -D_XOPEN_SOURCE_EXTENDED -D_POSIX_THREADS"
p2="-D_XOPEN_SOURCE_EXTENDED -D_POSIX_THREADS"
p3="-D_OPEN_SYS_FILE_EXT -qstrict "
p4="-Wa,asa,goff -qgonumber -qenum=int"
p5="-I//'COLIN.MQ930.SCSQC370' -I. -I/u/tmp/zpymqi/env/include"
p6="-I/usr/lpp/IBM/cyp/v3r8/pyz/include/python3.8"
p7="-Wc,ASM,EXPMAC,SHOWINC,ASMLIB(//'SYS1.MACLIB'),NOINFO "
p8="-Wc,LIST(c.lst),SOURCE,NOWARN64,FLAG(W),XREF,AGG -Wa,LIST,RENT"
/bin/xlc $p1 $p2 $p3 $p4 $p5 $p6 $p7 $p8 -c $name.c -o $name.o -qexportall -qagg -qascii
l1="-Wl,LIST=ALL,MAP,XREF -q64"
l1="-Wl,LIST=ALL,MAP,DLL,XREF -q64"
/bin/xlc $name.o $pythonSide -o $name.so $l1 1>a 2>b
oedit a
oedit b

This shell script creates a zos.so load module in the current directory.

You need to copy the output load module (zos.so) to a directory on the PYTHONPATH environment variable.

What do the parameters mean?

Many of the parameters I blindly copied from the setup.py script.

  • name=zos
    • This parametrizes the script, for example $name.c $name.o $name.so
  • pythonSide=’/usr/lpp/IBM/cyp/v3r8/pyz/lib/python3.8/config-3.8/libpython3.8.x’
    • This is the Python side deck, used to resolve links to the functions in the Python code
  • export _C89_CCMODE=1
    • This is needed to prevent the message “FSUM3008 Specify a file with the correct suffix (.c, .i, .s,.o, .x, .p, .I, or .a), or a corresponding data set name, instead of -o./zos.so.”
  • p1=” -DNDEBUG -O3 -qarch=10 -qlanglvl=extc99 -q64″
    • -O3 optimization level
    • -qarch=10 is the architectural level of the code to be produced.
    • -qlanglvl=extc99 says use the C extensions defined in C99 (for example, defining variables in the middle of a program, rather than only at the top)
    • -q64 says make this a 64 bit program
  • p2=”-D_XOPEN_SOURCE_EXTENDED -D_POSIX_THREADS”
    • The C #defines to preset
  • p3=”-D_OPEN_SYS_FILE_EXT -qstrict ”
    • -qstrict Used to prevent optimizations from re-ordering instructions that could introduce rounding errors.
  • p4=”-Wa,asa,goff -qgonumber -qenum=int”
    • -Wa,asa,goff options for any assembler compiles (not used)
    • -qgonumber include C program line numbers in any dumps etc
    • -qenum=int use integer variables for enums
  • p5=”-I//’COLIN.MQ930.SCSQC370′ -I. -I/u/tmp/zpymqi/env/include”
    • Where to find #includes:
    • the MQ libraries,
    • the current working directory
    • the header files for my component
  • p6=”-I/usr/lpp/IBM/cyp/v3r8/pyz/include/python3.8″
    • Where to find #includes
  • p7=”-Wc,ASM,EXPMAC,SHOWINC,ASMLIB(//’SYS1.MACLIB’),NOINFO ”
    • Support the use of __ASM__(…) inline assembler code.
    • Expand macros to show what is generated
    • List the data from #includes
    • If using __ASM__(…), where to find assembler copy files and macros.
    • Do not report information messages
  • p8=”-Wc,LIST(c.lst),SOURCE,NOWARN64,FLAG(W),XREF,AGG -Wa,LIST,RENT”
    • For C compiles, produce a listing in c.lst,
    • include the C source
    • do not warn about problems with 64 bit/31 bit
    • display the cross references (where used)
    • display information about structures
    • For Assembler programs generate a list, and make it reentrant
  • /bin/xlc $p1 $p2 $p3 $p4 $p5 $p6 $p7 $p8 -c $name.c -o $name.o -qexportall
    • Compile $name.c into $name.o ( so zos.c into zos.o) export all entry points for DLL processing
  • l1=”-Wl,LIST=ALL,MAP,DLL,XREF -q64″
    • bind parameters (-Wl,): produce a report,
    • show the map of the module
    • show the cross reference
    • it is a 64 bit object
  • /bin/xlc $name.o $pythonSide -o $name.so $l1 1>a 2>b
    • take the zos.o, the Python side deck and bind them into the zos.so
    • pass the parameters defined in l1
    • output the cross reference to a and errors to b
  • oedit a
    • This will have the map, cross reference and other output from the bind
  • oedit b
    • This will have any error messages – it should be empty

Notes:

  • -qarch=10 is the default
  • the -Wa are for when compiling assembler source eg xxxx.s
  • -qlanglvl=extc99. EXTENDED may be better than extc99.
  • it needs the -qascii to work with Python.

Python classes, objects, external functions and cleaning up.

I’ve been working on some code to use z/OS datasets and DD statements. It took me a while to understand how some bits of Python work.

I also did things such as open a file, allocate a 1MB buffer, and wondered how to close the file, and release the buffer to prevent a storage leak.

The Python import

The Python import makes external functions and classes available to a program. The syntax is like

import abc as xyz

x = xyz…..

abc can be

  • a file abc.py
  • a directory abc
  • a load module abc.so

I’ll focus on the load module.

The abc.so load module

This can define a function based approach, so you would use it like

fileHandle = zos.fopen("colin.c","rb")
data = zos.fread(fileHandle)
zos.fclose(fileHandle)

You can provide many functions. Some may return a “handle” object, such as fileHandle which is passed to other functions.

It can also be object based and the C load module external function creates a new type.

file = zos.fopen("colin.c","rb")
data = file.fread()
file.close()

The functions are associated with the object “file”, rather than the load module zos.

Internally the object is passed to the function.

Cleaning up

Within my code I had fileHandle = fopen(“datasetname”….), which allocated a 1MB buffer for the read function.

I also had fclose(fileHandle) where I closed the file and freed the buffer.

However I could also do

fileHandle = fopen("datasetname1"….)
fileHandle = fopen("datasetname2"….)
fileHandle = fopen("datasetname3"….)
fclose(fileHandle)

with no intermediate fclose(), which would lead to a storage leak, as the fclose routine was not being called for the earlier files.

Using a class to call a function at exit

If you have a Python class for your data you can use

def cb(self,a,b):
     self.handle =  zconsole.acb(a,b)
     atexit.register(self.clean_up,self.handle)

def clean_up(self,handle):
    if handle != None:
        zconsole.cancel(self.handle)

When function cb is used, it registers with the “at exit” routine atexit, and says, “at exit” call my routine “clean_up”, and pass the handle.

At shutdown the clean_up routine is called once for every instance, and gives the cancel code a chance to clean up.

Using a C external function and “functions”.

Within the external function’s C code is a PyModuleDef structure, which defines the module to Python.

As such there is no way to automatically get your clean up function to be called (and free my 1MB buffer).

However you can exploit the Python module state data. For example

typedef struct {
  char   eyec[8];   // eye catcher, set in the init code below
  char * pBuffer;   // for example, the 1MB read buffer
  // ... whatever else the module needs to remember
} state;

static struct PyModuleDef zos_module = {
  PyModuleDef_HEAD_INIT,
  "zos",
  zos_doc,
  sizeof(state), // size of the per-module state data
  zos_methods,   // the functions (methods)
  NULL,          // Multi phase init. NULL -> single
  NULL,          // Garbage collection traversal
  zos_clear,     // Garbage collection clear
  zos_free       // Garbage collection free
};

The block of state data is allocated for you, and you can use the PyModule_GetState(PythonModule) function to get access to this block.

You could chain your data from the state data, perhaps in a linked list.

When the clean up occurs, your “zos_free” routine will be called, and you can free all the storage you allocated and clean up.

For example

PyMODINIT_FUNC PyInit_zos(void) {
  PyObject *d;

  /* Create the module  */
  mzos = PyModule_Create(&zos_module);
  if (mzos == NULL) return NULL;

  // get the state data and initialise it
  state * pState = (state *) PyModule_GetState(mzos);
  memcpy(pState -> eyec,"state   ",8);
  ...

  d = PyModule_GetDict(mzos);  // the module's dictionary
  PyDict_SetItemString(d, "__doc__", Py23Text_FromString(zos_doc));
  PyDict_SetItemString(d,"__version__", Py23Text_FromString(__version__));

  return mzos;
}
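
When the module is freed, Python calls the zos_free routine named in the PyModuleDef above. A minimal sketch, assuming the state block holds the pBuffer pointer shown earlier and that the buffer was obtained with malloc(), is

static void zos_free(void *module)
{
  // get this module's state data block
  state * pState = (state *) PyModule_GetState((PyObject *) module);
  if (pState != NULL && pState->pBuffer != NULL)
  {
    free(pState->pBuffer);     // release the 1MB read buffer
    pState->pBuffer = NULL;
  }
}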

Using a C external function and “objects” or types.

With a “function based” function, you have Python code like

fileHandle = zos.fopen("myfilename"....)
data = zos.fread(fileHandle)
...

With “object based” functions you have Python code like

file = zos.fopen("colin.c","rb")
data = file.fread()
file.close()

In this case the object is a Python type. There is a good description here.

As with function based code, you define the attributes of the object, including the tp_dealloc function. This gets control when the object is deallocated. In the Custom_dealloc function you can close the file and free the buffer etc.

static PyTypeObject CustomType = {
    PyVarObject_HEAD_INIT(NULL, 0)
    .tp_name = "custom.Custom",
    .tp_doc = PyDoc_STR("Custom objects"),
    .tp_basicsize = sizeof(CustomObject),
    .tp_itemsize = 0,
    .tp_dealloc = (destructor) Custom_dealloc,
    .tp_flags = Py_TPFLAGS_DEFAULT,
    .tp_new = PyType_GenericNew,
};

static void
Custom_dealloc(CustomObject *self)
{
   ... // put your clean up code here, for example fclose() the file and free() the buffer
   Py_TYPE(self)->tp_free((PyObject *) self);  // then free the object itself
}

static PyModuleDef custommodule = {
    PyModuleDef_HEAD_INIT,
    .m_name = "custom",
    .m_doc = "Example module that creates an extension type.",
    .m_size = -1,
};

PyMODINIT_FUNC
PyInit_custom(void)
{
    PyObject *m;

    m = PyModule_Create(&custommodule);
    if (m == NULL)
        return NULL;
    Py_INCREF(&CustomType);
    if (PyModule_AddObject(m, "Custom", (PyObject *) &CustomType) <  0) {
        Py_DECREF(&CustomType);
        Py_DECREF(m);
        return NULL;
    }
    return m;
}

Note: The list of available .tp… definitions is available here.

Python import, packages and modules.

I’ve been building various Python packages (for example pymqi for z/OS, and accessing z/OS datasets from Python). It took me a while to understand how Python import works, for example why I needed two packages, one for my load modules, and one for the Python code.

There is a lot of good documentation, but I felt it was missing the view of an end user who was starting to work in this area.

The import statement

The Python import makes external functions and classes available to a program. The syntax is like

import abc as xyz

x = xyz…..

abc can be

  • a file abc.py
  • a directory abc
  • a load module abc.so

They do the same thing, but differently

The abc.py file

This Python source file can have a class (for objects) or functions in the file. It can import other files.

The abc.pyc file

This is a compiled Python file (from abc.py).

The abc.so load module

The load module is generated from C source.

This can define a function based approach, so you would use it like

fileHandle = zos.fopen("colin.c","rb")
data = zos.fread(fileHandle)
zos.fclose(fileHandle)

You can provide many functions. Some functions may return a “handle” object which is passed to other functions.

It can also be object based and the C code creates a new type.

hFile = zos.fopen("colin.c","rb")
data = hFile.fread()
hFile.fclose()

The function calls are attached to the object (hFile) – rather than the load module zos.

Internally the object is passed to the function.

The abc directory with __init__.py

This is known as a “regular” module package.

It has the __init__.py file, and can have other files and subdirectories.

The __init__.py is run when the package is first imported, so this can import other packages and do other initialisation.

The abc directory without __init__.py

This is the follow-on to regular module package, known as a “namespace” package. It feels a bit strange, and I guess most people do not need to know about it.

I’ll give the concept view here, and give an expanded description below.

For example you have a couple of directories

  • /u/mystuff/xyz/abc.py
  • /u/mystuff/xyz/a2.py
  • /usr/myprod/xyz/hij.py
  • /usr/myprod/xyz/klm.py

and when the PYTHONPATH has both directories in it, you can use

import xyz
from xyz import abc, klm

which selects the xyz directories on the PYTHONPATH and imports from them.

Packages

The documentation says …

Python defines two types of packages, regular packages and namespace packages. Regular packages are traditional packages as they existed in Python 3.2 and earlier. A regular package is typically implemented as a directory containing an __init__.py file. When a regular package is imported, this __init__.py file is implicitly executed, and the objects it defines are bound to names in the package’s namespace. The __init__.py file can contain the same Python code that any other module can contain, and Python will add some additional attributes to the module when it is imported.

A Namespace package is a composite of various portions, where each portion contributes a sub-package to the parent package. Portions may reside in different locations on the file system. Portions may also be found in zip files, or where-ever else that Python searches during import. Namespace packages may or may not correspond directly to objects on the file system; they may be virtual modules that have no concrete representation.

My view as to how they work is

Regular packages

You have PYTHONPATH pointing to a list of directories.

You want to import foo.

  • For each directory on PYTHONPATH
    • If <directory>/foo/__init__.py is found, return the regular package foo
    • If <directory>/foo.{py,pyc,so,pyd} is found, return the regular package foo

If this returns with a package then import the package.

Namespace package

You have PYTHONPATH pointing to a list of directories.

You want to import foo.

  • dirList = “”
  • For each directory on PYTHONPATH
    • If <directory>/foo/__init__.py is found, return the regular package foo
    • If <directory>/foo.{py,pyc,so,pyd} is found, return the regular package foo
    • If “<directory>/foo/” is a directory then dirList += “<directory>/foo/”

If no package was returned, and dirList is not empty then we have a namespace package.

This can be used as follows

from foo import abc

has logic like

  • for d in dirlist:
    • if d/”abc.*” exists then return d/”abc….”

This has the advantage that you can work on a sub component.

If you have PYTHONPATH = /u/colin:/usr/python, and there is a file /u/colin/foo/abc.py, the statement from foo import abc, xyz imports /u/colin/foo/abc.py and /usr/python/foo/xyz.py

When is an error not an err?

If you step off the golden path of trying to read a file – you can quickly end up in trouble, and the diagnostics do not help.

I had some simple code

FILE * hFile = fopen(...); 
recSize = fread(pBuffer ,1,bSize,hFile); 
if (recSize == 0)
{
  // into the bog!
 if (feof(hFile))printf("end of file\n");
 else if (ferror(hFile)) printf("ferror(hFile) occurred\n");
 else printf("Cannot occur condition\n");
 
}

When running a unit test of the error path, passing a bad file handle, I got the “Cannot occur condition” message, because ferror() returned “OK – no problem”.

The ferror() description is

General description: Tests for an error in reading from or writing to the specified stream. If an error occurs, the error indicator for the stream remains set until you close the stream, call rewind(), or call clearerr().
If a non-valid parameter is given to an I/O function, z/OS XL C/C++ does not turn the error flag on. This case differs from one where parameters are not valid in context with one another.

This gave me 0, so it was not able to detect my error. ( So what is the point of ferror()?)

If I looked at errno and used perror() I got

errno 113
EDC5113I Bad file descriptor. (errno2=0xC0220001)

You may think that I need to ignore ferror() and check errno != 0 instead. Good guess, but it may not be that simple.

The __errno2 (or errnojr – errno junior) description is

General description: The __errno2() function can be used when diagnosing application problems. This function enables z/OS XL C/C++ application programs to access additional diagnostic information, errno2 (errnojr), associated with errno. The errno2 may be set by the z/OS XL C/C++ runtime library, z/OS UNIX callable services or other callable services. The errno2 is intended for diagnostic display purposes only and it is not a programming interface. The __errno2() function is not portable.
Note: Not all functions set errno2 when errno is set. In the cases where errno2 is not set, the __errno2() function may return a residual value. You may use the __err2ad() function to clear errno2 to reduce the possibility of a residual value being returned.

If you are going to use __errno2 you should clear it using __err2ad() before invoking a function that may set it.

I could not find whether errno is cleared, or whether it too may return a residual value, so to be safe, set errno to zero before every use of a C run time library function.

Having got your errno value what do you do with it?

There are #define constants in errno.h such as

#define EIO 122 /* Input/output error */

You can use if ( errno == EIO ) …

Like many products, there is no mapping from 122 back to the name “EIO”, but you can use strerror(errno) to map the errno to the error string, like EDC5113I Bad file descriptor. (errno2=0xC0220001). This also provides the errno2 value.
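
Putting that together, here is a minimal sketch of the checking pattern. The dataset name is just an illustration; the key points are clearing errno before the call, and using perror(), strerror() and __errno2() afterwards.

#include <stdio.h>
#include <string.h>
#include <errno.h>

int main(void)
{
  char buffer[1024];
  errno = 0;                                      /* clear errno before the call     */
  FILE * hFile = fopen("//'MY.DATA.SET'","rb");   /* illustrative dataset name       */
  if (hFile == NULL)
  {
    perror("fopen");                              /* prints the EDC5xxxI message     */
    printf("errno %d: %s\n", errno, strerror(errno));
    printf("__errno2 = %08x\n", __errno2());      /* diagnostic only - not an API    */
    return 8;
  }
  errno = 0;
  size_t recSize = fread(buffer, 1, sizeof(buffer), hFile);
  if (recSize == 0 && errno != 0)                 /* ferror() may not be set - see above */
    printf("fread errno %d: %s\n", errno, strerror(errno));
  fclose(hFile);
  return 0;
}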

Using ASCII stuff in a C program

This is another topic which looks simple on the surface but has hidden depths. (Where “hidden depths” is a euphemism for “looks buggy”.) It started off as one page, but by the time I had discovered the unexpected behaviours, it became 10 times the size. The decision tree to say whether text will be printed in ASCII or EBCDIC was three levels deep – but I give a solution.

Why do you want to use ASCII stuff in a C program?

Java, Python and Node.js run on z/OS, and they use ASCII for the character strings, so if you are writing JNI interfaces for Java, or C external functions for Python you are likely to use this.


The high level view

You can define ASCII character strings in C using:

char * pA = "\x41\x42\x43\x30" ;  // "ABC0" in ASCII
// or 
#pragma convert(819)
char * pASCII = "CODEPAGE 819 data" ;
#pragma convert(pop)

And EBCDIC strings (for example when using ASCII compile option)

#pragma convert("IBM-1047") 
char * pEBCDIC = "CODEPAGE 1047  data" ; 
#pragma convert(pop) 

You can define that the whole program is “ASCII” by using the -Wc,ASCII or -qascii options at compile time. This also gives you printf and other functions.

You can use requests like fopen(“myfile”…) and the code successfully handles the file name in ASCII. Under the covers I expect the code does an __a2e_l() to convert from ASCII to EBCDIC, then uses the EBCDIC version of fopen().

The executable code is essentially the same – the character strings are in ASCII, and the “Flags” data is different (to reflect the different compile options). At bind time different stubs are included.

To get my program, compiled with the -qascii option, to successfully print to the OMVS terminal, piped using |, or redirected using 1>, I had to use the C run time function fcntl() to set the code page and tag of STDOUT and STDERR. See below.

You need to set both STDERR and STDOUT – as perror() prints to STDERR, and you may need this if you have errors in the C run time functions.

Some of the hidden depths!

I had a simple “Hello World” program which I compiled in USS with the -qascii option. Depending on how I ran it, I got “Hello World” or “çÁ%%?-ï?Ê%À” (Hello World in ASCII).

  • straight “./prog”. The output in ASCII
  • pipe “./prog | cat”. If I used the environment variable _TAG_REDIR_OUT=”TXT” the output was in EBCDIC – otherwise it came out in ASCII.
  • redirect to a file “./prog 1> aaa”. Depending on the environment variable _BPXK_AUTOCVT=”ON”, whether the file already existed, and whether it was tagged, the output could be in EBCDIC or ASCII!

So all in all – it looks a bit of a mess.

Background to outputting data

Initially I found printing of ASCII data was easy; then I found what I had written sometimes did not work, and after a week or so I had a clearer idea of what is involved, then I found a few areas which were even more complex. You may want to read this section once, read the rest of the blog post, then come back to this section.

A file can have

  • a tag – this is typically “this file is binary|text|unknown”
  • the code page of the data – or not specified.
  • you can use ls -T file.name to display the tag and code page of a file.

Knowing that information…

  • If a file has a tag=text and an ASCII code page,
    • if the _BPXK_AUTOCVT=”ON” environment flag is set it will display in EBCDIC (eg cat…)
    • else (_BPXK_AUTOCVT=”OFF”) it will display unreadable ASCII (eg cat…)
  • If a file has a tag=binary then converting to a different code page makes no sense. For example a .jpeg or load module. Converting a load module would change the instructions Store Half Word(x40) to a Load Positive (x20).
  • If a file is not tagged – it becomes a very fuzzy area. The output is dependant on the program running. An ASCII program would output as ASCII, and an EBCDIC would output as EBCDIC.
  • ISPF edit is smart enough to detect the file is ASCII – and enable ISPF edit command “Source ASCII”.

Other strands to the complexity

  • Terminal type
    • If you are logged on using rlogin, you can use chcp to change the tag or code page of your terminal.
    • If you are logged in through TSO – using OMVS, you cannot use chcp. I can’t find a command to set the tag or code page, but you can set it programmatically.
  • Redirection
    • You can print to the terminal or redirect the output to a file for example ./runHello 1>output.file .
    • The file can be created with the appropriate tag and code page.
    • You can use the environment variables _TAG_REDIR_OUT=TXT|BIN and _TAG_REDIR_ERR=TXT|BIN to specify what the redirected output will be.
    • If you use _TAG_REDIR_OUT=TXT, the output is in EBCDIC.
      • you can use ./prog | cat to take the output of prog and pipe it through cat to display the lines in EBCDIC.
      • you can use ./prog |grep ale etc. For me this displayed the EBCDIC line with Locale in it.
    • You can have the situation where if you print to the terminal it is in ASCII, if you redirect, it looks like EBCDIC!
  • The tag and ccsid are set on the first write to the file. If you write some ASCII data, then set the tag and code page, the results are unpredictable. You should set the tag and ccsid before you try to print anything.

What is printed?

A rough set of rules to determine what gets printed are

  • If the output file has a non zero code page, and _BPXK_AUTOCVT=”ON” is set, the output is the data converted from that code page to EBCDIC.
  • If the output file has a zero code page, then the code page from the program is used. A program compiled with ASCII will have an ASCII code page, else it will default to 1047.

A program can set the tag, and code page using the fcntl() function.

Once the tag and code page for STDOUT and STDERR have been set, they remain set until reset. This leads to the confusing sequence of commands and output

  • run my ascii compiled “hello world” program – the data comes out as ASCII.
  • run python3 --version which sets the STDOUT code page.
  • rerun my ascii compiled “hello world” program – the data comes out as EBCDIC! This is because the STDOUT code page was set by Python (and not reset).

I could not find a field to tell me which code page should be used for STDOUT and STDERR, so I suggest using 1047 unless you know any better.

Using fcntl() to display and set the code page of the output STDOUT and STDERR

To display the information for an open file you can use the fcntl() function. The stat() function also provides information on the open file. They may not be entirely consistent when using a pipe or redirect.

#include <fcntl.h> 
...
struct f_cnvrt f; 
f.cvtcmd = QUERYCVT; 
f.pccsid = 0; 
f.fccsid = 0; 
int action = F_CONTROL_CVT; 
rc =fcntl( STDOUT_FILENO,action, &f ); 
...
switch(f.cvtcmd) 
{ 
  case QUERYCVT : printf("QUERYCVT - Query"); break; 
  case SETCVTOFF: printf("SETCVTOFF -Set off"); break; 
  case SETCVTON : printf("SETCVTON -Set on unconditionally"); break; 
  case SETAUTOCVTON : printf("SETAUTOCVTON - Set on conditionally"); break; 
  default:  printf("cvtcmd %d",f.cvtcmd); 
} 
printf(" pccsid=%hd fccsid=%hd\n",f.pccsid,f.fccsid); 

For my program compiled in EBCDIC the output was

SETCVTOFF -Set off pccsid=1047 fccsid=0

This shows the program was compiled as EBCDIC (1047), and the STDOUT ccsid was not set.

When the ascii compiled version was used, the output, in ASCII was

SETCVTON -Set on unconditionally pccsid=819 fccsid=819

This says the program was ASCII (819) and the STDOUT code page was ASCII (819).

When I reran the EBCDIC compiled version, the output was

SETCVTOFF -Set off pccsid=1047 fccsid=0

Setting the code page

printf("before text\n");
f.cvtcmd = SETCVTON ; 
f.fccsid = 0x0417  ;  // file code page 0x417 = 1047 (EBCDIC)
f.pccsid = 0x0000  ;  // program code page: 0x333 = 819 (ASCII), 0 = take the default 
rc =fcntl( STDOUT_FILENO,action, &f ); 
if ( rc != 0) perror("fcntl"); 
printf("After fcntl\n"); 

The first time this ran as an ASCII program, the “before text” was not readable, but the “After fcntl” was readable. The second time it was readable. For example, the second time:

SETCVTON -Set on unconditionally pccsid=819 fccsid=1047
before text
After fcntl

You may want to put some logic in your program (there is a sketch after this list)

  • use fcntl and f.cvtcmd = QUERYCVT for STDOUT and STDERR
  • if fccsid == 0 then set it to 1047, using fcntl and f.cvtcmd = SETCVTON
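
A minimal sketch of that logic, using the f_cnvrt structure from the earlier example, might be (setDefaultCcsid is just an illustrative name)

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>

/* if the stream's ccsid has not been set, set it to 1047 so the output is readable */
static void setDefaultCcsid(int fileNumber)
{
  struct f_cnvrt f;
  f.cvtcmd = QUERYCVT;                        /* query the current settings          */
  f.pccsid = 0;
  f.fccsid = 0;
  if (fcntl(fileNumber, F_CONTROL_CVT, &f) != 0)
  {
    perror("fcntl QUERYCVT");
    return;
  }
  if (f.fccsid == 0)                          /* file ccsid has not been set         */
  {
    f.cvtcmd = SETCVTON;                      /* turn conversion on unconditionally  */
    f.fccsid = 1047;                          /* treat the stream as EBCDIC 1047     */
    f.pccsid = 0;                             /* 0 = take the program's default      */
    if (fcntl(fileNumber, F_CONTROL_CVT, &f) != 0)
      perror("fcntl SETCVTON");
  }
}

For example, call setDefaultCcsid(STDOUT_FILENO) and setDefaultCcsid(STDERR_FILENO) before the first printf.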

Set the file tag

This is needed to set the file tag for use when output is piped or redirected.

void settag(int fileNumber) 
{ 
  int rc; 
  int action; 
  struct file_tag ft; 
  memset(&ft,0,sizeof(ft)); 
  ft.ft_ccsid = 0x0417; 
  ft.ft_txtflag = 1; 
  ft.ft_deferred = 0;  // if on then use the ccsid from the program! 
  action = F_SETTAG; 
  rc =fcntl( fileNumber,action, &ft); 
  if ( rc < 0) 
  { 
     perror("fctl f_settag"); 
     printf("F_SETTAG %i %d\n",rc,errno); 
  } 
}
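
For example, call it for both streams before the first printf, since perror() writes to STDERR (STDOUT_FILENO and STDERR_FILENO come from unistd.h)

settag(STDOUT_FILENO);   /* tag STDOUT as text, ccsid 1047 */
settag(STDERR_FILENO);   /* tag STDERR as text, ccsid 1047 */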

What gets printed (without the fcntl() code)?

It depends on

  • if you are printing to the terminal – or redirecting the output.
  • if the STDOUT fccsid is set to 1047
  • If _BPXK_AUTOCVT=”ON”

Single program case normal program

If you have a “normal” C program and some EBCDIC character data, when you print it, you get output like “Hello World”.

Single program case ASCII program or data – fccsid = 0

If you have a C program, compiled with the ASCII option, and print some ASCII data, when you print it you get output like ø/ËËÁÀ-À/È/-

Single program case ASCII program or data – fccsid = 1047

If you have a C program, compiled with the ASCII option, and print some ASCII data, you get output like “Hello World”.

Program with both ASCII and EBCDIC data – compiled with ASCII

With a program with

#pragma convert("IBM-1047") 
 char * pEBCDIC ="CODEPAGE 1047  data"  ; 
#pragma convert(pop) 
printf("EBCDIC:%s\n",pEBCDIC); 

#pragma convert(819) 
char * pASCII = "CODEPAGE 819 data" ; 
#pragma convert(pop) 
printf("ASCII:%s\n",pASCII); 

When compiled without ascii it produces

EBCDIC:CODEPAGE 1047 data
ASCII:ä|àá& åá—–À/È/

When compiled with ascii it produces

EBCDIC:ÃÖÄÅ×ÁÇÅ@ñðô÷@@–£-
ASCII:CODEPAGE 819 data

Mixing programs compiled with/without the ASCII options

If you have a main program and function which have been compiled the same way – either both with ASCII compile option, or both without, and pass a string to the function to print, the output will be readable “Hello World”.

If the two programs have been compiled one with, and one without the ASCII options, an ASCII program will try to print the EBCDIC data, and an EBCDIC program will try to print the ASCII data.

If you know you are getting the “wrong” code page data, you can use the C run time functions __e2a_l or __a2e_l to convert the data.
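
For example, a minimal sketch for an EBCDIC program that has been handed ASCII data (my assumption: __a2e_l() is declared in unistd.h and converts the buffer in place) is

#include <stdio.h>
#include <string.h>
#include <unistd.h>                              /* __a2e_l() and __e2a_l()          */

/* the data arrived in ASCII but this program is EBCDIC, so convert before printing */
static void printAsciiData(char *pData)
{
  if (__a2e_l(pData, strlen(pData)) == -1)       /* ASCII -> EBCDIC, in place        */
    perror("__a2e_l");
  else
    printf("%s\n", pData);
}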

Decoding/displaying the ASCII output

If you redirect the output to a file, for example ./cp2.so 1>a, you can display the contents of the file converted from ASCII using

  • the “source ascii” command in ISPF edit, or
  • oedit . , and use the “ea” (edit ascii) or “/” line command

The “source ascii” works with SDSF output, with the SE line command.

Several times I had a file with EBCDIC data but the file had been tagged as ASCII. I used chtag -tc IBM-1047 file.name to reset it.

Under the covers

At compile time

There is a predefined macro __CHARSET_LIB which is defined to 1 when the ASCII compiler option is in effect, and to 0 when the ASCII compile option is not used.

There is C macro logic, like the following, which defines which function to use

#if (__CHARSET_LIB == 1) 
#pragma map (printf, "@@A00118")
#else 
#pragma map (printf, "printf")
#endif

A program compiled with the ASCII option will use a different C run time module (@@A00118), compared with the non ASCII mode, which will use printf.

The #pragma map “renames” the function as input to the binder.

This is known as bimodal and is described in the documentation.

I want my program to be bimodal.

Bimodal is where your program can detect whether it is running as ASCII or EBCDIC and make the appropriate decision. This can happen if you have code which is #included into the end user’s program.

This is documented here.

You can use

#define _AE_BIMODAL 1   /* define this before the #include of <stdio.h> */
if (__isASCII()) // detect the run time
   __printf_a(format, string);  // use the ascii version
else
   __printf_e(format, string); // use the ebcdic version

fopen trace etc is not so useful

If you specify an environment variable you can trace the C file operations.

This did not give much useful information, as it did not give the name of the file being processed, and I could not trace the file which was causing fopen problems, so overall a good idea – but a poor implementation.

How to set it up

See File I/O trace, Locating the file I/O trace and the environment variable _EDC_IO_TRACE

For example

export _EDC_IO_TRACE="(*,2,1M)"

Where filter is

Filter Indicates which files to trace.

  • //DD:filter Trace will include the DD names matching the specified filter string.
  • //filter Trace will include the MVS data sets matching the specified filter string. Member names of partitioned data sets cannot be matched without the use of a wildcard.
  • filter Trace will include the Unix files matching the specified filter string.
  • //DD:* Trace will include all DD names.
  • //* Trace will include all MVS data sets. This is the default setting.
  • /* Trace will include all Unix files.
  • * Trace will include all MVS data sets and Unix files.

Detail – use 2.

Buffer size such as 1M or 50K .

The output goes to a file in a directory such as /tmp, but you can change this with

export _CEE_DMPTARG="."

This worked for me … but initially I could not read the output file. (It may be because it came from Python, which has been compiled with the ASCII option.)

The command ls -ltrT showed the file was tagged in ASCII, so I used

chtag -r EDC*

to reset it, and I could edit the file.

Sample output

Trace details for ((POSIX)):
        Trace detail level:  2 
        Trace buffer size:   1024K                                                                  
        fdopen(10,r)                                                                 
        fldata: 
            __recfmF:1........ 0            __dsorgVSAM:1..... 0 
            __recfmV:1........ 0            __dsorgHFS:1...... 1 
            __recfmU:1........ 1            __openmode:2...... 1 
...

Which is not very helpful as it does not tell you the file that has been opened!

When I traced a Python program, I only got information on 5 files – instead of the hundreds I was expecting.

Various abends and problems

I’ll list them here for search engines to find.

CEE3250C The system or user abend U4000 R=00007017 was issued.

U4000

  • Explanation: The assembler user exit could have forced an abend for an unhandled condition. These are user-specified abend codes.
  • System action:Task terminated.
  • Programmer response: Check the Language Environment message file for message output. This will tell you what the original abend was.

There were no other messages. BPXBATCH ended with return code 2304 which means a kill -9 was issued.

If I remove the _EDC_IO_TRACE it works.

I also got a file BST-1.20220809.110241.83951661 etc which is tagged as ASCII – but is not.

This file had the trace for the Python file which was being run – including the name of the file.

If you cannot open a data set, amrc may help

I had a C program which opened a dataset and read from it. I enhanced it, by adding comments and other stuff, and after lunch it failed to open the dataset.

I undid all my changes, and it still failed to open! Weird.

I got message

  • EDC5061I
    • An error occurred when attempting to define a file to the system. (errno2=0xC00B0403)
    • Programmer response : Check the __amrc structure for more information. See z/OS XL C/C++ Programming Guide for more information on the __amrc structure.
  • C00B0403:
    • The filename argument passed to fopen() or freopen() specified dsname syntax. Allocation of a ddname for the dsname was attempted, but failed.
    • Programmer response: Failure information returned from SVC 99 was recorded in the AMRC structure. Use the information there to determine the cause of the failure.

This feels like the unhelpful messages I’ve seen: “An error has occurred – we know what the error is – but we won’t tell you” type messages.

To find the reason I had to add some code to my program.

 file =  fopen(fileName, mode); 
 __amrc_type save_amrc; 
 memcpy(&save_amrc,__amrc,sizeof(save_amrc)); 
 printf("AMRC __svc99_info %hd error %hd\n",
         save_amrc.__code.__alloc.__svc99_info, 
         save_amrc.__code.__alloc.__svc99_error); 
                                                                                    

and it printed

AMRC __svc99_info 0 528

The documentation for DYNALLOC (dynamic allocation), which uses SVC 99 to allocate data sets, has a section Interpreting error reason codes from DYNALLOC. The meaning of 528 (x'0210') is Requested data set unavailable. The data set is allocated to another job and its usage attribute conflicts with this request.

And true enough, in an ISPF session under one of my TSO userids I was editing the dataset.

It looks like

printf("__errno2 = %08x\n", __errno2());

would print the same information.

Thoughts

It appears that there is no way to tell fopen to open the dataset for read while another user has it allocated for update.

For DYNALLOC, if the request worked, these fields may have garbage in them – as I got undocumented values.

It would be nice if the developer of the fopen code produced messages like

EDC5061I: An error occurred when attempting to define a file to the system. (errno2=0xC00B0403) (AMRC=0x00000210)

Then it would be more obvious!