Compiling the TCP/IP samples on z/OS

Communications Server (TCP/IP) on z/OS provides some sample programs. I had problems getting these to compile, because the JCL in the documentation was a) wrong and b) about 20 years behind the times.

Samples

There are some samples in TCPIP.SEZAINST

  • TCPS: a server which listens on a port
  • TCPC: a client which connects to a server using IP address and port
  • UDPC: C socket UDP client
  • UDPS: C socket UDP server
  • MTCCLNT: C socket Multitasking client
  • MTCSRVR: C socket Multitasking server
  • MTCCSUB: C socket subtask MTCCSUB

The JCL I used is

//COLCOMPI   JOB 1,MSGCLASS=H,COND=(4,LE) 
//S1          JCLLIB ORDER=CBC.SCCNPRC 
// SET LOADLIB=COLIN.LOAD 
// SET LIBPRFX=CEE 
// SET SOURCE=COLIN.C.SOURCE(TCPSORIG) 
//COMPILE  EXEC PROC=EDCCB, 
//       LIBPRFX=&LIBPRFX, 
//       CPARM='OPTFILE(DD:SYSOPTF),LSEARCH(/usr/include/)', 
// BPARM='SIZE=(900K,124K),RENT,LIST,RMODE=ANY,AMODE=31' 
//COMPILE.SYSLIB DD 
//               DD 
//               DD DISP=SHR,DSN=TCPIP.SEZACMAC 
//*              DD DISP=SHR,DSN=TCPIP.SEZANMAC  for IOCTL 
//COMPILE.SYSOPTF DD * 
DEF(_OE_SOCKETS) 
DEF(MVS) 
LIST,SOURCE 
TEST 
RENT ILP32        LO 
INFO(PAR,USE) 
NOMARGINS EXPMAC   SHOWINC XREF 
LANGLVL(EXTENDED) sscom dll 
DEBUG 
/* 
//COMPILE.SYSIN    DD  DISP=SHR,DSN=&SOURCE 
//BIND.SYSLMOD DD DISP=SHR,DSN=&LOADLIB. 
//BIND.SYSLIB  DD DISP=SHR,DSN=TCPIP.SEZARNT1 
//             DD DISP=SHR,DSN=&LIBPRFX..SCEELKED 
//* BIND.GSK     DD DISP=SHR,DSN=SYS1.SIEALNKE 
//* BIND.CSS    DD DISP=SHR,DSN=SYS1.CSSLIB 
//BIND.SYSIN DD * 
  NAME  TCPS(R) 
//START1   EXEC PGM=TCPS,REGION=0M, 
// PARM='4000          ' 
//STEPLIB  DD DISP=SHR,DSN=&LOADLIB 
//SYSERR   DD SYSOUT=*,DCB=(LRECL=200) 
//SYSOUT   DD SYSOUT=*,DCB=(LRECL=200) 
//SYSPRINT DD SYSOUT=*,DCB=(LRECL=200) 

Change the source

The samples, as shipped, do not compile with the above JCL. I needed to remove some of the #includes

#include <manifest.h> 
// #include <bsdtypes.h> 
#include <socket.h> 
#include <in.h> 
// #include <netdb.h> 
#include <stdio.h> 

With the original sample I got compiler messages

ERROR CCN3334 CEE.SCEEH.SYS.H(TYPES):66 Identifier dev_t has already been defined on line 98 of “TCPIP.SEZACMAC(BSDTYPES)”.
ERROR CCN3334 CEE.SCEEH.SYS.H(TYPES):77 Identifier gid_t has already been defined on line 101 of “TCPIP.SEZACMAC(BSDTYPES)”.
ERROR CCN3334 CEE.SCEEH.SYS.H(TYPES):162 Identifier uid_t has already been defined on line 100 of “TCPIP.SEZACMAC(BSDTYPES)”.
ERROR CCN3334 CEE.SCEEH.H(NETDB):87 Identifier in_addr has already been defined on line 158 of “TCPIP.SEZACMAC(IN)”.


INFORMATIONAL CCN3409 TCPIP.SEZAINST(TCPS):133 The static variable “ibmcopyr” is defined but never referenced.

I tried many combinations of #define but could not get it to compile, unless I removed the #includes.

Compile problems I stumbled upon

Identifier dev_t has already been defined on line ...
Identifier gid_t has already been defined on line ...
Identifier uid_t has already been defined on line ...

This was caused by the wrong libraries in SYSLIB. I needed

  • CEE.SCEEH.H
  • CEE.SCEEH.SYS.H
  • TCPIP.SEZACMAC
  • TCPIP.SEZANMAC

The compile problems were caused by CEE.SCEEH.SYS.H being missing.

Execution problems

I had some strange execution problem when I tried to use AT-TLS within the program.

EDC5000I No error occurred. (errno2=0x05620062)

The errno2 reason from TSO BPXMTEXT 05620062 was

BPXFSOPN 04/27/18
JRNoFileNoCreatFlag: A service tried to open a nonexistent file without O_CREAT

Action: The open service request cannot be processed. Correct the name or the open flags and retry the operation.

Which seems very strange. I have a feeling that this field is not properly initialised and that this value can be ignored.

Running assembler control block chains in C

I needed to extract some information from z/OS in my C program. There is not a callable interface for the data, so I had to chain through z/OS control blocks.

Once you have an example to copy it is pretty easy – it is just getting started which is the problem.

I have code (which starts with PSATOLD)

 #define PSA  540 
 char *TCB   = (char*)*(int*)(PSA); 
 char *TIO   = (char*)*(int*)(TCB + 12); 
 char *TIOE  = (char*)(TIO + 24) ; 
                                                            

  • At absolute address 540 (0x21C) is the address of the currently executing TCB (the PSATOLD field).
  • (int *) (PSA) says treat this as an integer (4 byte) pointer.
  • * takes the value of what this integer pointer points to. This is the address of the TCB.
  • TCB + 12. Offset 12 (0x0C) in the TCB is the address of the Task I/O Table (TCBTIO).
  • (int *) says treat this as an integer (4 byte) pointer.
  • * takes the value of it to get to the TIOT.
  • Offset 24 (0x18) is the location of the first TIOT Entry in the control block.

The code I originally copied had char * (long *) PSA. This worked fine in 31 bit programs, but not in a 64 bit program, because a long is 8 bytes there, so it picked up a 64 bit value as the address – not a 32 bit one! I had to use “int” to get it to work.

Another example, which prints the CPU TCB and SRB time used by each address space, is

// CVT Main anchor for many system wide control blocks
#define FLTCVT     16L
//  The first Address Space Control Block
#define CVTASCBH  564L
// the chain of ASCBs - next
#define ASCBFWDP    4L
//  offset to job info
#define ASCBEJST   64L
// the ASID of this address space
#define ASCBASID   36L


__int64 lTCB, lSRB; // could have used long long
short ASID;
int i;
char *plStor = (char*)FLTCVT;
char *plCVT  = (char*)*(int*)plStor;
char *plASCB = (char*)*(int*)(plCVT+CVTASCBH); // first ASCB
for( i=0; i<1000 && plASCB != NULL;
       i++, plASCB = (char*)*(int*)(plASCB+ASCBFWDP) )
{
  lTCB = *(__int64*)(plASCB+ASCBEJST) >> 12; // microseconds
  lSRB = *(__int64*)(plASCB+ASCBSRBT) >> 12; // microseconds (needs a #define for ASCBSRBT, not shown above)
  ASID = *(short*)(plASCB+ASCBASID);
  printf("ASID=%4.4x TCB=%lld; SRB=%lld\n", ASID, lTCB, lSRB);
}

Creating a C external function for Python, an easier way

I wrote about my first steps in creating a C extension in Python. Now I’ve got more experience, I’ve found an easier way of compiling the program and creating a load module. It is not the official way – but it works, and is easier to do!

The traditional way of building a package is to use the setup.py technique. I’ve found that just compiling it directly works just as well (and is slightly faster). You still need the setup.py for building the Python source.

I set up a cp4.sh file

name=zos 
pythonSide='/usr/lpp/IBM/cyp/v3r8/pyz/lib/python3.8/config-3.8/libpython3.8.x' 
export _C89_CCMODE=1 
p1=" -DNDEBUG -O3 -qarch=10 -qlanglvl=extc99 -q64" 
p2="-Wc,DLL -D_XOPEN_SOURCE_EXTENDED -D_POSIX_THREADS" 
p2="-D_XOPEN_SOURCE_EXTENDED -D_POSIX_THREADS" 
p3="-D_OPEN_SYS_FILE_EXT                     -qstrict          " 
p4="-Wa,asa,goff -qgonumber -qenum=int" 
p5="-I//'COLIN.MQ930.SCSQC370' -I. -I/u/tmp/zpymqi/env/include" 
p6="-I/usr/lpp/IBM/cyp/v3r8/pyz/include/python3.8" 
p7="-Wc,ASM,EXPMAC,SHOWINC,ASMLIB(//'SYS1.MACLIB'),NOINFO " 
p8="-Wc,LIST(c.lst),SOURCE,NOWARN64,FLAG(W),XREF,AGG -Wa,LIST,RENT" 
/bin/xlc $p1 $p2 $p3 $p4 $p5 $p6 $p7 $p8  -c $name.c -o $name.o  -qexportall -qagg -qascii 
l1="-Wl,LIST=ALL,MAP,XREF     -q64" 
l1="-Wl,LIST=ALL,MAP,DLL,XREF     -q64" 
/bin/xlc $name.o  $pythonSide  -o $name.so  $l1 1>a 2>b 
oedit a 
oedit b 

This shell script creates a zos.so load module in the current directory.

You need to copy the output load module (zos.so) to a directory on the PYTHONPATH environment variable.

What do the parameters mean?

Many of the parameters I blindly copied from the setup.py script.

  • name=zos
    • This parametrizes the script, for example $name.c $name.o $name.so
  • pythonSide=’/usr/lpp/IBM/cyp/v3r8/pyz/lib/python3.8/config-3.8/libpython3.8.x’
    • This is the Python side deck, used for resolving links to the functions in the Python code
  • export _C89_CCMODE=1
    • This is needed to prevent the message “FSUM3008 Specify a file with the correct suffix (.c, .i, .s,.o, .x, .p, .I, or .a), or a corresponding data set name, instead of -o./zos.so.”
  • p1=” -DNDEBUG -O3 -qarch=10 -qlanglvl=extc99 -q64″
    • -O3 optimization level
    • -qarch=10 is the architectural level of the code to be produced.
    • -qlanglvl=extc99 says use the C extensions defined in C99. (For example defining variables in the middle of a program, rather than only at the top.)
    • -q64 says make this a 64 bit program
  • p2=”-D_XOPEN_SOURCE_EXTENDED -D_POSIX_THREADS”
    • The C #defines to preset
  • p3=”-D_OPEN_SYS_FILE_EXT -qstrict “
    • -qstrict Used to prevent optimizations from re-ordering instructions that could introduce rounding errors.
  • p4=”-Wa,asa,goff -qgonumber -qenum=int”
    • -Wa,asa,goff options for any assembler compiles (not used)
    • -qgonumber include C program line numbers in any dumps etc
    • -qenum=int use integer variables for enums
  • p5=”-I//’COLIN.MQ930.SCSQC370′ -I. -I/u/tmp/zpymqi/env/include”
    • Where to find #includes:
    • the MQ libraries,
    • the current working directory
    • the header files for my component
  • p6=”-I/usr/lpp/IBM/cyp/v3r8/pyz/include/python3.8″
    • Where to find #includes
  • p7=”-Wc,ASM,EXPMAC,SHOWINC,ASMLIB(//’SYS1.MACLIB’),NOINFO “
    • Support the use of __ASM().. to use inline assembler code.
    • Expand macros to show what is generated
    • List the data from #includes
    • If using__ASM__(…) where to find assembler copy files and macros.
    • Do not report informational messages
  • p8=”-Wc,LIST(c.lst),SOURCE,NOWARN64,FLAG(W),XREF,AGG -Wa,LIST,RENT”
    • For C compiles, produce a listing in c.lst,
    • include the C source
    • do not warn about problems with 64 bit/31 bit
    • display the cross references (where used)
    • display information about structures
    • For Assembler programs generate a list, and make it reentrant
  • /bin/xlc $p1 $p2 $p3 $p4 $p5 $p6 $p7 $p8 -c $name.c -o $name.o -qexportall
    • Compile $name.c into $name.o ( so zos.c into zos.o) export all entry points for DLL processing
  • l1=”-Wl,LIST=ALL,MAP,DLL,XREF -q64″
    • bind parameters (-Wl,): produce a report,
    • show the map of the module
    • show the cross reference
    • it is a 64 bit object
  • /bin/xlc $name.o $pythonSide -o $name.so $l1 1>a 2>b
    • take the zos.o and the Python side deck and bind them into zos.so
    • pass the parameters defined in l1
    • output the cross reference to a and errors to b
  • oedit a
    • This will have the map, cross reference and other output from the bind
  • oedit b
    • This will have any error messages – it should be empty

Notes:

  • -qarch=10 is the default
  • the -Wa are for when compiling assembler source eg xxxx.s
  • -qlanglvl=extc99: EXTENDED may be better than extc99.
  • it needs the -qascii to work with Python.

When is an error not an error?

If you step off the golden path when trying to read a file, you can quickly end up in trouble, and the diagnostics do not help.

I had some simple code

FILE * hFile = fopen(...); 
recSize = fread(pBuffer ,1,bSize,hFile); 
if (recSize == 0)
{
  // into the bog!
 if (feof(hFile))printf("end of file\n");
 else if (ferror(hFile)) printf("ferror(hFile) occurred\n");
 else printf("Cannot occur condition\n");
 
}

When running a unit test of the error path, passing a bad file handle, I got the “Cannot occur condition” message, because ferror() returned “OK – no problem”.

The ferror() description is

General description: Tests for an error in reading from or writing to the specified stream. If an error occurs, the error indicator for the stream remains set until you close the stream, call rewind(), or call clearerr().
If a non-valid parameter is given to an I/O function, z/OS XL C/C++ does not turn the error flag on. This case differs from one where parameters are not valid in context with one another.

This gave me 0, so it was not able to detect my error. ( So what is the point of ferror()?)

If I looked at errno and used perror() I got

errno 113
EDC5113I Bad file descriptor. (errno2=0xC0220001)

You may think that I need to ignore ferror() and check errno != 0 instead. Good guess, but it may not be that simple.

The __errno2 (or errnojr – errno junior) description is

General description: The __errno2() function can be used when diagnosing application problems. This function enables z/OS XL C/C++ application programs to access additional diagnostic information, errno2 (errnojr), associated with errno. The errno2 may be set by the z/OS XL C/C++ runtime library, z/OS UNIX callable services or other callable services. The errno2 is intended for diagnostic display purposes only and it is not a programming interface. The __errno2() function is not portable.
Note: Not all functions set errno2 when errno is set. In the cases where errno2 is not set, the __errno2() function may return a residual value. You may use the __err2ad() function to clear errno2 to reduce the possibility of a residual value being returned.

If you are going to use __errno2 you should clear it using __err2ad() before invoking a function that may set it.

I could not find whether errno is cleared or whether it may return a residual value, so to be safe, clear it before every use of a C run time library function.

Having got your errno value what do you do with it?

There are #define constants in errno.h such as

#define EIO 122 /* Input/output error */

You can use if ( errno == EIO ) …

As with many products, there is no mapping from 122 back to the name “EIO”, but you can use strerror(errno) to map the errno to the error string, such as EDC5113I Bad file descriptor. (errno2=0xC0220001). This also gives you the errno2 value within the string.
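
A minimal sketch of this error handling (the file name no.such.file is just an example that is expected to fail):

#include <stdio.h>
#include <string.h>
#include <errno.h>

int main(void)
{
  FILE *f;
  errno = 0;                            /* clear errno before the call          */
  f = fopen("no.such.file","r");        /* hypothetical name - expected to fail */
  if (f == NULL)
  {
    printf("errno %d: %s\n", errno, strerror(errno));
    if (errno == EIO) printf("This was an I/O error\n");
  }
  else fclose(f);
  return 0;
}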

Using ASCII stuff in a C program

This is another topic which looks simple on the surface but has hidden depths. (Where “hidden depths” is a euphemism for “looks buggy”.) It started off as one page, but by the time I had discovered the unexpected behaviours, it became 10 times the size. The decision tree to say whether text will be printed in ASCII or EBCDIC was three levels deep – but I give a solution.

Why do you want to use ASCII stuff in a C program?

Java, Python and Node.js run on z/OS, and they use ASCII for the character strings, so if you are writing JNI interfaces for Java, or C external functions for Python you are likely to use this.

The high level view

You can define ASCII character strings in C using:

char pA[] = {0x41, 0x42, 0x43, 0x30, 0x00};  // "ABC0" in ASCII
// or 
#pragma convert(819)
char * pASCII = "CODEPAGE 819 data" ;
#pragma convert(pop)

And EBCDIC strings (for example when using the ASCII compile option)

#pragma convert("IBM-1047") 
char * pEBCDIC = "CODEPAGE 1047  data" ; 
#pragma convert(pop) 

You can define that the whole program is “ASCII” by using the -Wc,ASCII or -qascii options at compile time. This also gives you the ASCII variants of printf and other run time functions.

You can use requests like fopen(“myfile”…) and the code successfully handles the file name in ASCII. Under the covers I expect the code does __a2e_l() to convert from ASCII to EBCDIC, then uses the EBCDIC version of fopen().

The executable code is essentially the same – the character strings are in ASCII, and the “Flags” data is different (to reflect the different compile options). At bind time different stubs are included.

To get my program, compiled with the -qascii option, to successfully print to the OMVS terminal, piped using |, or redirected using 1>, I had to use the C run time function fcntl() to set the code page and tag of STDOUT and STDERR. See below.

You need to set both STDERR and STDOUT – as perror() prints to STDERR, and you may need this if you have errors in the C run time functions.

Some of the hidden depths!

I had a simple “Hello World” program which I compiled in USS with the -qascii option. Depending on how I ran it, I got “Hello World” or “çÁ%%?-ï?Ê%À” (Hello World in ASCII).

  • straight “./prog”. The output is in ASCII.
  • pipe “./prog | cat”. If I used the environment variable _TAG_REDIR_OUT=”TXT” the output was in EBCDIC – otherwise it came out in ASCII.
  • redirect to a file “./prog 1> aaa”. The result depended on the environment variable _BPXK_AUTOCVT=”ON”, on whether the file already existed, and on whether it was tagged. The output could be in EBCDIC or ASCII!

So all in all – it looks a bit of a mess.

Background to outputting data

Initially I found printing of ASCII data was easy; then I found what I had written sometimes did not work, and after a week or so I had a clearer idea of what is involved, then I found a few areas which were even more complex. You may want to read this section once, read the rest of the blog post, then come back to this section.

A file can have

  • a tag – this is typically “this file is binary|text|unknown”
  • the code page of the data – or not specified.
  • you can use ls -T file.name to display the tag and code page of a file.

Knowing that information…

  • If a file has a tag=text and an ASCII code page,
    • if the _BPXK_AUTOCVT=”ON” environment flag is set it will display in EBCDIC (eg cat…)
    • else (_BPXK_AUTOCVT=”OFF”) it will display as unreadable ASCII (eg cat…)
  • If a file has a tag=binary then converting to a different code page makes no sense. For example a .jpeg or load module. Converting a load module would change the instructions, for example a Store Halfword (x40) into a Load Positive (x20).
  • If a file is not tagged – it becomes a very fuzzy area. The output is dependent on the program running. An ASCII program would output ASCII, and an EBCDIC program would output EBCDIC.
  • ISPF edit is smart enough to detect the file is ASCII – and enable ISPF edit command “Source ASCII”.

Other strands to the complexity

  • Terminal type
    • If you are logged on using rlogin, you can use chcp to change the tag or code page of your terminal.
    • If you are logged in through TSO – using OMVS, you cannot use chcp. I can’t find a command to set the tag or code page, but you can set it programmatically.
  • Redirection
    • You can print to the terminal or redirect the output to a file for example ./runHello 1>output.file .
    • The file can be created with the appropriate tag and code page.
    • You can use the environment variables _TAG_REDIR_OUT=TXT|BIN and _TAG_REDIR_ERR=TXT|BIN to specify what the redirected output will be.
    • If you use _TAG_REDIR_OUT=TXT, the output is in EBCDIC.
      • you can use ./prog | cat to take the output of prog and pipe it through cat to display the lines in EBCDIC.
      • you can use ./prog |grep ale etc. For me this displayed the EBCDIC line with Locale in it.
    • You can have the situation where if you print to the terminal it is in ASCII, if you redirect, it looks like EBCDIC!
  • The tag and ccsid are set on the first write to the file. If you write some ASCII data, then set the tag and code page, the results are unpredictable. You should set the tag and ccsid before you try to print anything.

What is printed?

A rough set of rules to determine what gets printed is

  • If the output file has a non zero code page, and _BPXK_AUTOCVT=”ON” is set, the output is the data converted from the code page to EBCDIC.
  • If the output file has a zero code page, then the code page from the program is used. A program compiled with ASCII will have an ASCII code page, else it will default to 1047.

A program can set the tag, and code page using the fcntl() function.

Once the tag and code page for STDOUT and STDERR have been set, they remain set until reset. This leads to the confusing sequence of commands and output

  • run my ascii compiled “hello world” program – the data comes out as ASCII.
  • run Python3 –version which sets the STDOUT code page.
  • rerun my ascii compiled “hello world” program – the data comes out as EBCDIC! This is because the STDOUT code page was set by Python (and not reset).

I could not find a field to tell me which code page should be used for STDOUT and STDERR, so I suggest using 1047 unless you know any better.

Using fcntl() to display and set the code page of the output STDOUT and STDERR

To display the information for an open file you can use the fcntl() function. The stat() function also provides information on the open file. They may not be entirely consistent when using a pipe or redirection.

#include <fcntl.h> 
...
struct f_cnvrt f; 
f.cvtcmd = QUERYCVT; 
f.pccsid = 0; 
f.fccsid = 0; 
int action = F_CONTROL_CVT; 
rc =fcntl( STDOUT_FILENO,action, &f ); 
...
switch(f.cvtcmd) 
{ 
  case QUERYCVT : printf("QUERYCVT - Query"); break; 
  case SETCVTOFF: printf("SETCVTOFF -Set off"); break; 
  case SETCVTON : printf("SETCVTON -Set on unconditionally"); break; 
  case SETAUTOCVTON : printf("SETAUTOCVTON - Set on conditionally"); break; 
  default:  printf("cvtcmd %d",f.cvtcmd); 
} 
printf(" pccsid=%hd fccsid=%hd\n",f.pccsid,f.fccsid); 

For my program compiled in EBCDIC the output was

SETCVTOFF -Set off pccsid=1047 fccsid=0

This shows the program was compiled as EBCDIC (1047), and the STDOUT ccsid was not set.

When the ascii compiled version was used, the output, in ASCII was

SETCVTON -Set on unconditionally pccsid=819 fccsid=819

This says the program was ASCII (819) and the STDOUT code page was ASCII (819).

When I reran the ebcdic complied version, the output was

SETCVTOFF -Set off pccsid=1047 fccsid=0

Setting the code page

printf("before text\n");
f.cvtcmd = SETCVTON ; 
f.fccsid = 0x0417  ;  // code page 1047
f.pccsid = 0x0000  ;  // program ccsid: 0x333 (819) = ascii, 0 = take default 
rc =fcntl( STDOUT_FILENO,action, &f ); 
if ( rc != 0) perror("fcntl"); 
printf("After fcntl\n"); 

The first time this ran as an ASCII program, the “before text” was not readable, but the “After fcntl” was readable. The second time it was readable. For example, the second time:

SETCVTON -Set on unconditionally pccsid=819 fccsid=1047
before text
After fcntl

You may want to put some logic in your program

  • use fcntl and f.cvtcmd = QUERYCVT for STDOUT and STDERR
  • if fccsid == 0 then set it to 1047, using fcntl and f.cvtcmd = SETCVTON
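
A minimal sketch of that logic, using the same f_cnvrt structure and constants as in the code above:

#include <fcntl.h>
#include <unistd.h>
#include <stdio.h>

static void ensureCcsid(int fd)     /* fd is STDOUT_FILENO or STDERR_FILENO */
{
  struct f_cnvrt f;
  f.cvtcmd = QUERYCVT;              /* ask what is currently set            */
  f.pccsid = 0;
  f.fccsid = 0;
  if (fcntl(fd, F_CONTROL_CVT, &f) != 0) { perror("fcntl QUERYCVT"); return; }
  if (f.fccsid == 0)                /* nothing set yet - default it to 1047 */
  {
    f.cvtcmd = SETCVTON;
    f.fccsid = 1047;
    f.pccsid = 0;                   /* 0 = take the program's own ccsid     */
    if (fcntl(fd, F_CONTROL_CVT, &f) != 0) perror("fcntl SETCVTON");
  }
}

int main(void)
{
  ensureCcsid(STDOUT_FILENO);
  ensureCcsid(STDERR_FILENO);
  printf("Hello World\n");
  return 0;
}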

Set the file tag

This is needed to set the file tag for use when output is piped or redirected.

void settag(int fileNumber) 
{ 
  int rc; 
  int action; 
  struct file_tag ft; 
  memset(&ft,0,sizeof(ft)); 
  ft.ft_ccsid = 0x0417; 
  ft.ft_txtflag = 1; 
  ft.ft_deferred = 0;  // if on then use the ccsid from the program! 
  action = F_SETTAG; 
  rc =fcntl( fileNumber,action, &ft); 
  if ( rc < 0) 
  { 
     perror("fctl f_settag"); 
     printf("F_SETTAG %i %d\n",rc,errno); 
  } 
}

What gets printed (without the fcntl() code)?

It depends on

  • if you are printing to the terminal – or redirecting the output.
  • if the STDOUT fccsid is set to 1047
  • If _BPXK_AUTOCVT=”ON”

Single program case normal program

If you have a “normal” C program and some EBCDIC character data, when you print it, you get output like “Hello World”.

Single program case ASCII program or data – fccsid = 0

If you have a C program, compiled with the ASCII option, and print some ASCII data, when you print it you get output like ø/ËËÁÀ-À/È/-

Single program case ASCII program or data – fccsid = 1047

If you have a C program, compiled with the ASCII option, and print some ASCII data, when you print it you get output like “Hello World”.

Program with both ASCII and EBCDIC data – compiled with ASCII

With a program with

#pragma convert("IBM-1047") 
 char * pEBCDIC ="CODEPAGE 1047  data"  ; 
#pragma convert(pop) 
printf("EBCDIC:%s\n",pEBCDIC); 

#pragma convert(819) 
char * pASCII = "CODEPAGE 819 data" ; 
#pragma convert(pop) 
printf("ASCII:%s\n",pASCII); 

When compiled without ascii it produces

EBCDIC:CODEPAGE 1047 data
ASCII:ä|àá& åá—–À/È/

When compiled with ascii it produces

EBCDIC:ÃÖÄÅ×ÁÇÅ@ñðô÷@@–£-
ASCII:CODEPAGE 819 data

Mixing programs compiled with/without the ASCII options

If you have a main program and function which have been compiled the same way – either both with ASCII compile option, or both without, and pass a string to the function to print, the output will be readable “Hello World”.

If the two programs have been compiled one with, and one without the ASCII options, an ASCII program will try to print the EBCDIC data, and an EBCDIC program will try to print the ASCII data.

If you know you are getting the “wrong” code page data, you can use the C run time functions __e2a_l or __a2e_l to convert the data.
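
For example, a minimal sketch of an EBCDIC (non -qascii) program converting an ASCII literal in place before printing it (I believe __a2e_l() is declared in unistd.h – check your system):

#include <stdio.h>
#include <string.h>
#include <unistd.h>                      /* declares __a2e_l() and __e2a_l() */

int main(void)
{
#pragma convert(819)
  char ascii[] = "Hello World";          /* the literal is stored in ASCII   */
#pragma convert(pop)
  __a2e_l(ascii, strlen(ascii));         /* convert in place ASCII->EBCDIC   */
  printf("%s\n", ascii);                 /* now prints readable EBCDIC       */
  return 0;
}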

Decoding/displaying the ASCII output

If you redirect the output to a file, for example ./cp2.so 1>a, you can display the contents of the file converted from ASCII using

  • the “source ascii” command in ISPF edit, or
  • oedit . , and use the “ea” (edit ascii) or “/” line command

The “source ascii” works with SDSF output, with the SE line command.

Several times I had a file with EBCDIC data but the file had been tagged as ASCII. I used chtag -tc IBM-1047 file.name to reset it.

Under the covers

At compile time

There is a predefined macro __CHARSET_LIB which is defined to 1 when the ASCII compiler option is in effect and to 0 when the ASCII compile option is not used.

There is C macro logic, like the following, which selects which function to use

#if (__CHARSET_LIB == 1)
  #pragma map (printf, "@@A00118")
#else
  #pragma map (printf, "printf")
#endif

A program compiled with the ASCII option will use a different C run time module (@@A00118), compared with the non ASCII mode, which will use printf.

The #pragma map “renames” the function as input to the binder.

This is known as bimodal and is described in the documentation.

I want my program to be bimodal.

Bimodal is where your program can detect whether it is running as ASCII or EBCDIC and make the appropriate decision. This can happen if you have code which is #included into the end user’s program.

This is documented here.

You can use

#define _AE_BIMODAL 1            // must be defined before including stdio.h
#include <stdio.h>
...
if (__isASCII())                 // detect the run time mode
   __printf_a(format, string);   // use the ascii version
else
   __printf_e(format, string);   // use the ebcdic version

If you cannot open a data set, amrc may help

I had a C program which opened a dataset and read from it. I enhanced it, by adding comments and other stuff, and after lunch it failed to open.

I undid all my changes, and it still failed to open! Weird.

I got message

  • EDC5061I
    • An error occurred when attempting to define a file to the system. (errno2=0xC00B0403)
    • Programmer response : Check the __amrc structure for more information. See z/OS XL C/C++ Programming Guide for more information on the __amrc structure.
  • C00B0403:
    • The filename argument passed to fopen() or freopen() specified dsname syntax. Allocation of a ddname for the dsname was attempted, but failed.
    • Programmer response: Failure information returned from SVC 99 was recorded in the AMRC structure. Use the information there to determine the cause of the failure.

This feels like the unhelpful messages I’ve seen before: “An error has occurred – we know what the error is – but we won’t tell you” type messages.

To find the reason I had to add some code to my program.

file = fopen(fileName, mode);
__amrc_type save_amrc;
memcpy(&save_amrc, __amrc, sizeof(save_amrc));
printf("AMRC __svc99_info %hd error %hd\n",
        save_amrc.__code.__alloc.__svc99_info,
        save_amrc.__code.__alloc.__svc99_error);

and it printed

AMRC __svc99_info 0 528

The DYNALLOC (dynamic allocation) documentation (DYNALLOC uses SVC 99 to allocate data sets) has a section “Interpreting error reason codes from DYNALLOC”. The meaning of 528 (x’0210′) is “Requested data set unavailable. The data set is allocated to another job and its usage attribute conflicts with this request.”

And true enough, in an ISPF session under one of my TSO userids I was editing the data set.

It looks like

printf("__errno2 = %08x\n", __errno2());

would print the same information.

Thoughts

It appears that you cannot tell fopen to open a data set just for read when another user already has it open for update (a write lock).

For DYNALLOC, if the request worked, these fields may have garbage in them – as I got undocumented values.

It would be nice if the developer of the fopen code produced messages like

EDC5061I: An error occurred when attempting to define a file to the system. (errno2=0xC00B0403) (AMRC=0x00000210)

Then it would be more obvious!

There are many ways to fail to read a file in a C program.

The 10 minute task to read a file from disk using a C program took a week.

There are several options you can specify on the C fopen() function and it was hard to find the one that worked. I basically tried all combinations. Once I got it working, I tried on other files, and these failed, so my little task turned into a big piece of research.

Depending on the fopen options and the use of fgets or fread, I got different data when reading from a dataset with two records in it!

  • AAAA – what I wanted.
  • AAAAx’15’ – with a hex 15 on the end.
  • AA
  • AAAAx’15’BBBBx’15’- both the records, with each record terminated in a line-end x’15’.
  • AAAABBBB – both records with no line ends.
  • x’00140000′ x’00080000’AAAAx’00080000’BBBB.
    • This has a Block Descriptor Word (BDW) (x’00140000′) saying the length of the block is x’14’.
    • There is a Record Descriptor Word (RDW) (x’00080000′) giving a length of x’0008′, including the 4 bytes for the RDW.
    • There is the data for the first record AAAA,
    • a second RDW,
    • the second record’s data.
    • The size of this record is 4 for the BDW, 4 for the first RDW, 4 for the AAAA, 4 for the second RDW, and 4 for the data = 20 bytes = x’14’.
  • Nothing!

There are different sorts of data

  • z/OS uses data sets which have records. Records can be blocked (for better disk utilisation). It is more efficient to write one block containing 40 * 80 byte records than to write 40 blocks each of 80 bytes. You can treat the data as one big block (of size 3200 bytes) – or as 40 blocks of 80 (or 2 blocks of 1600 …). With good blocking you can get 10 times the amount of data onto a disk. (If you imagine each record on disk needs a 650 byte header you will see why blocking is good.)
  • You can have “text” files, which contain printable characters. They often use the new line character (x’15’) to indicate end of line.
  • You can have binary files which should be treated as a blob. In these, characters like x’15’ do not represent new line. For example it might just be part of a sequence x’13141516′.
  • Unix (and so OMVS) uses byte-addressable storage.
    • “Normal files” have data in EBCDIC
    • You can use enhanced ASCII. Where you can have files with data in ASCII (or other code page). The file metadata contains the type of data (binary or text) and the code page of the text data.
    • With the environment variable _BPXK_AUTOCVT you can have automatic conversion, which converts a text file in ASCII to EBCDIC when it is read.

From this you can see that you need to use the correct way of accessing the data (records or byte addressable), and of using the data (text or binary).

Different ways of getting the data

You can use

  • the fread function which reads the data, according to the file open options (binary|text, blocked)
  • the fgets function which returns text data (often terminated by a line end character).

Introduction to fopen()

The fopen() function takes a file name and information on how the file should be opened, and returns a handle. The handle can be used in an fread() function or to provide information about the file. The C runtime reference gives the syntax of fopen(), and there is a lot of information in the C Programming Guide.

There is a lot of good information – but it didn’t help me open a file and read the contents.

The file name

The name can be, for example:

  • “DD:SYSIN” – a reference to a data set via JCL
  • “//’USER.PROCLIB(MYPROC)'” – a direct reference to a dataset
  • “/etc/profile” – an OMVS file.

If you are using the Unix shell you need to use “//’datasetname'” with both sets of quotes.

The options

This options string has one or more parameters.

The first parameter defines the operation: read, write, read+write etc. Any other parameters are in the format keyword=value.

The read operation can be

  • “rb” to read a binary file. In a binary file, the data is treated as a blob.
  • “rt” to read a text file. A text file can have auto conversion see FILETAG C run time option.
  • “r”

The type can be

  • type=record – read a record of data from the disk, and give the record to the application
  • type=blocked. It is more efficient in disk storage terms, to build up a record of 10 * 80 byte records into one 800 byte record (or even 3200 byte records). When a blocked record is read, the read request returns the first N bytes, then the next N bytes. When the end of the block is reached it will retrieve the next block from disk.
  • not specified, which implies byte access, rather than record access, and use of fgets function.

An example fopen

FILE * f2 = fopen("//'COLIN.VB'","rt type=blocked");

How much data is read at a time?

Depending on the type of data, and the open options a read request may return data

  • up to the end of the record
  • up to and including the new-line character
  • up to the size of the buffer you give it.

If you do not give it a big enough buffer you may have to issue several reads to get the whole logical record.

The system may also “compress” data, for example remove trailing blanks from the end of a line. If you were expecting an 80 byte record – you might only get 20 bytes – so you may need to check this, and fill in the missing blanks if needed.
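
A minimal sketch of padding a short record back out to the expected length, assuming the record really was fixed length, blank padded data:

#include <string.h>

/* pad a record that came back shorter than the expected LRECL */
static size_t padRecord(char *buffer, size_t got, size_t lrecl)
{
  if (got < lrecl)
    memset(buffer + got, ' ', lrecl - got);  /* restore the trailing blanks */
  return lrecl;
}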

Reading the data

You can use the fread() function

The fread() parameters are:

  • the address of a buffer to hold the data
  • the unit of the buffer block size
  • the number of buffer blocks
  • the file handle.

It returns the number of completed buffer blocks used.

It is usually used

#define lBuffer 1024

char buffer[lBuffer];
size_t len = fread(buffer, 1 ,lBuffer ,fHandle );

This says the unit of the buffer is 1 character, and there are 1024 of them.

If the record returned was 80 bytes long, then len would be 80.

If it was written as

size_t len = fread(buffer, lBuffer, 1 ,fHandle );

This says the unit of buffer is 1024 – and there is one of them. The returned “length” is 0, as there were no full 1024 size buffers used.

It appears that a returned length of 0 means either end of file or a file error. You can tell if the read is trying to go past the end of the file (feof()) or was due to a file error (ferror()).

if (feof(hFile)) ...
else if (ferror(hFile))
{
  int myerror = errno;
  perror("Colins error");
}

You can use the fgets function.

The fgets() parameters are:

  • the address of a buffer to hold the data
  • the size of the buffer
  • the file handle.

This returns a null terminated string in the buffer. You can use strlen(buffer) to get the length of it. An empty string would have length 0.

fgets() returns a pointer to the string, or NULL if there is an error. If there is an error you can use feof() or ferror() as above.
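
A minimal sketch of reading a data set a line at a time with fgets() (the data set name is just an example):

#include <stdio.h>
#include <string.h>

int main(void)
{
  char buffer[1024];
  FILE *hFile = fopen("//'COLIN.VB'", "rt");   /* example data set name */
  if (hFile == NULL) { perror("fopen"); return 8; }
  while (fgets(buffer, sizeof(buffer), hFile) != NULL)
    printf("got %d bytes: %s", (int)strlen(buffer), buffer);
  if (ferror(hFile)) perror("fgets");
  fclose(hFile);
  return 0;
}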

Results from reading data sets and files

I did many tests with different options, and configurations, and the results are displayed below.


The following are used in the tables:

  • Run.
    • job. The program was run in JCL using BPXBATSL or via PGM=…
    • unix shell – the program was run in OMVS shell.
  • BPXAUTO. This is used in OMVS shell. The environment variable _BPXK_AUTOCVT can have
    • “ON” – do automatic conversion from ASCII to EBCDIC where application
    • “OFF” do not do automatic conversion from ASCII to EBCDIC
  • open. The options used.
    • r – read
    • rb – read binary
    • rt – read text
    • “t=r” is shorthand for type=record
  • read. The C function used to read the data.
    • fread – this is the common read function. It can read text files and binary files
    • fgets – this is used to get text from a file. The records usually have the new-line character at the end of the data.
    • aread is a C program which uses fread – but the program is compiled with the ASCII option.
  • data
    • E is the EBCDIC data I was expecting. For example reading from a FB dataset member gave me an 80 byte record with the data in it.
    • E15 is EBCDIC data, but trailing blanks have been removed. The last character is an EBCDIC line end (x15).
    • A0A. The data is returned in ASCII, and has a trailing line end character (x0A). You can convert this data from ASCII to EBCDIC using the C run time function __a2e_l(buffer,len) .
    • Abuffer – data up to the size of the buffer ( or size -1 for fgets) the data in ASCII.
    • Ebuffer – data up to the size of the buffer ( or size -1 for fgets) the data in EBCDIC. This data may have newlines within the buffer

To read a dataset

This worked with a data set and inline data within a JOB.

Run         BPXAUTO   open     read    data
Job         NA        rb,t=r   fread   E
                      r|rt     fread   Ebuffer
                      r        fgets   E15
                      rt       fgets   E15
Unix shell  Any       rb,t=r   fread   E
                      r|rt     fread   Ebuffer
                      r        fgets   E15
                      rt       fgets   E15

Read a normal(EBCDIC) OMVS file

Run         BPXAUTO   open     read    data
Job         off       rb,t=r   fread   E
            off       rt       fgets   E15
            off       *        fread   Ebuffer
Unix shell  off       r        fgets   E15
            off       rt       fgets   E15
            off       rb       fgets   E15
            off       *        fread   E15
            on        rb,t=r   fread   E
            on        rb       fgets   E15

Read an ASCII file in OMVS

Run         BPXAUTO   open   read    data
Job         off       r      agets   A0A
            off       rb     agets   A0A
            off       rt     agets   A0A
Unix shell  on        r      fgets   E15
            on        rb     fgets   E15
            on        rt     fgets   E15
            on        *      fread   buffer
            off       r      agets   A0A
            off       rb     agets   A0A
            off       rt     agets   A0A
            off       *      fread   Abuffer

To read a binary file

To read a data set

Run         BPXAUTO   open   read    data
Job         off       rb     fread   E
Unix shell  on        rb     fread   E

Read a binary normal(EBCDIC) OMVS file

Run         BPXAUTO   open   read    data
Job         off       rb     fread   E
Unix shell  on        rb     fread   E
            off       rb     fread   E

Read a binary ASCII file in OMVS

If you list the attributes of a binary file, for example “ls -T ecrsa.p12”, it gives

b binary T=off ecrsa1024.p12

Which shows it is a binary file

Run         BPXAUTO   open     read    data
Job         off       rb,t=r   fread   E
Unix shell  on        rb,t=r   fread   E
            off       rb,t=r   fread   E

Reading data sets from a shell script

If you try to use fopen(“DD:xxxx”…) from a shell script you will get

FOPEN: EDC5129I No such file or directory. (errno2=0x05620062)

If you use fopen(“//’COLIN.VB'”…) and specify a fully qualified dataset name it will work.

fopen(“//VB”..) will put the RACF userid in front of the name. For example it will attempt to open “//’COLIN.VB'”.

How big a buffer do I need?

If the buffer is not large enough to get all of the data in the record, the buffer is filled with as much data as it can hold. For example using fgets and a 20 byte buffer, 19 characters were returned. The 20th character was a null. The “new-line” character was present only when the end of the record was reached.

How can I tell the size of the buffer I need?

There is data available which helps – but does not give the whole picture.

With the C fldata() — Retrieve file information function you can get information from the file handle such as

  • the name of the dataset (as provided at open time)
  • record format – fixed, variable
  • dataset organisation (dsorg) – Partitioned, Temporary, HFS
  • open mode – text, binary, record
  • device type – disk, printer, hfs, terminal
  • blksize
  • maximum record length
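
A minimal sketch using fldata(); the fldata_t field names are as I remember them from stdio.h, so check them on your system:

#include <stdio.h>

int main(void)
{
  char name[FILENAME_MAX];
  fldata_t info;
  FILE *f = fopen("//'COLIN.VB'", "rb,type=record");  /* example data set */
  if (f == NULL) { perror("fopen"); return 8; }
  if (fldata(f, name, &info) == 0)
    printf("%s recfm V=%d F=%d U=%d blksize=%d maxreclen=%d\n",
           name, info.__recfmV, info.__recfmF, info.__recfmU,
           (int)info.__blksize, (int)info.__maxreclen);
  fclose(f);
  return 0;
}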

With fstat(), fstat64() — Get status information from a file handle, and lstat(), lstat64() — Get status of file name or symbolic link, you can get information about an OMVS file name (or file handle). This has the information available in the OMVS “ls” command. For example

  • file serial number
  • owner userid
  • group userid
  • size of the file
  • time last access
  • time last modified
  • time last file status changed
  • file tag
    • ccsid value, or 0x0000 for untagged, or 0xffff for binary
    • pure text flag.
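
A minimal sketch using stat() to display the size and file tag of an OMVS file (I believe _OPEN_SYS_FILE_EXT is needed to expose the st_tag field):

#define _OPEN_SYS_FILE_EXT 1   /* assumption: needed to expose st_tag */
#include <sys/stat.h>
#include <stdio.h>

int main(void)
{
  struct stat st;
  if (stat("/etc/profile", &st) != 0) { perror("stat"); return 8; }
  printf("size=%ld ccsid=%d text flag=%d\n",
         (long)st.st_size, st.st_tag.ft_ccsid, st.st_tag.ft_txtflag);
  return 0;
}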

Example output

For data sets and files (along the top of the table)

  • VB a sequential data set Variable Blocked
  • FB a member of user.proclib which is Fixed Block
  • SYSIN inline data in a job
  • Loadlib a library containing load modules (binary file)
  • OMVS file for example zos.c
  • ASCII for example z.py which has been tagged as ASCII

Other values

  • V variable format data
  • F fixed format data
  • Blk it has blocked records
  • B “Indicates whether it was allocated with blocked records”. I do not know the difference between Blk and B
  • U Undefined format
  • PS It is a Physical Sequential
  • PO It is partitioned – it has members
  • PDSE it is a PDSE
  • HFS it is a file from OMVS
             VB        FB          SYSIN     loadlib          OMVS file  ASCII
fldata recfm V Blk B   F Blk B     F Blk B   U                U          U
dsorg        PS        PO PDS Mem  PS        PO PDS Mem PDSE  HFS        HFS
device       disk      disk        other     disk             HFS        HFS
blocksize    6144      6160        80        6400             0          0
maxreclen    1024      80          80        6400             1024       1024
stat ccsid   0         0           0         0                0          819
file size    0         0           0         0                4780       824

To read a record, you should use the maxreclen size. This may not be the size of the data record but it is the best guess.

It looks like the maxreclen for Unix files is 1024 – I guess this is the page size on disk.

One minute MVS: DLLs and load modules

This is one of a series of short blog posts on z/OS topics. This post is about using Dynamic Link Library. It follows on from One minute MVS: Binder and loader which explains some of the concepts used.

Self contained load modules are good but…

An application may have a main program and use external functions bound to the main program. This is fine for the simple case. In more complex environments where a program loads external modules this may be less effective.

If MOD1 and MOD2 are modules and they both have common subroutines MYSUBS, then when MOD1 and MOD2 are loaded, they each have a copy of MYSUBS. This can lead to inefficient use of storage (and is not as fast as having one copy which everything uses).

You may not be responsible for MYSUBS, you just use the code. If it is bound into your load module, then in order to use a later version you have to rebind your load modules. This is not very convenient.

Stubs are better

There are several techniques to get round these problems.

In your program you can use a stub module. This sits between your application and the code which does all of the work.

  • For example for many of the z/OS provided facilities, the stub modules chain through z/OS control blocks to find the address of the routine to use.
  • You can have the subroutine code in a self-contained load module. Your stub can load the module then branch to it. This way you can use the latest available version of code. For example MQCONN loads a module, other MQ verbs use the module, MQDISC releases the load module.

If the stub loads a module, then other threads using the stub just increment the in-use count and do not have to reload the module (from disk). This means you do not have multiple copies of the load module in memory.

The Dynamic Link Library support does this load of the module for you; it is a build option rather than a change to your code.

Creating a DLL Object

When you compile the C source you need to have the options DLL and EXPORTALL. (The documentation says it is better to use #pragma export(name) for each entry point, rather than use the EXPORTALL option – but this means making a small change to your program.)
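
For example, a minimal sketch of exporting a chosen entry point (the function names are just illustrations):

#pragma export(add_one)   /* export just this entry point                     */

int add_one(int x)        /* callers of the DLL can resolve add_one           */
{
  return x + 1;
}

int not_exported(int x)   /* no #pragma export, so not in the DLL's side deck */
{
  return x - 1;
}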

When you bind the module you need parameter DYNAM=DLL, and specify //BIND.SYSDEFSD DD… pointing to a member, for example COLIN.SIDE(CERT)

The member will have information (known as a side deck) with information about the functions your program has exposed. For example

 IMPORT DATA,'CERT','ascii_tab' 
 IMPORT DATA,'CERT','other_tab' 
 IMPORT CODE,'CERT','isbase64' 
 IMPORT CODE,'CERT','printHex' 
 IMPORT CODE,'CERT','UnBase64' 

Where

  • IMPORT …. ‘CERT’ says these are for load module CERT
  • IMPORT DATA – these labels are data constants
  • IMPORT CODE – these are functions within CERT

If this was a 64 bit program, it would have IMPORT CODE64, and IMPORT DATA64.

If you compile in Unix Services you get a side-deck file like /u/tmp/console/cert.x containing

IMPORT DATA64,'cert.so','ascii_tab'
IMPORT CODE64,'cert.so','isbase64'
...

and the load module /u/tmp/console/cert.so.

The .x file is only used by the binder. The .so file is loaded and used by programs.

When your program tries to use one of these functions, for example isbase64, module CERT(cert.so) is loaded and used. This means you need the library containing this module to be available to the job.

Binding with the DLL

Instead of binding your program with the CERT object, you bind it with the side deck, using the equivalent of INCLUDE COLIN.SIDE(CERT).
The binder associates the external reference isbase64 with the code in load module CERT.

Conceptually the isbase64 reference, points to some stub code which does

  • Load module CERT
  • Find entry point isbase64 within this module
  • Replace the address of the isbase64 external reference, so it now points to the real isbase64 entry point in module CERT.
  • Branch to the real isbase64 code.

The next time isbase64 is used – it goes directly to the isbase64 function in the CERT module.

Using DLLs

You can use DLLs in Unix, or normal address spaces.

For a program running in Unix services you can use the C run time functions

  • dlopen(modules_name)
  • dlclose(dlHandle) release the module
  • dlsym(dlHandle,function_name|variable_name) to get information about a function name or variable
  • dlerror() to get diagnostic information about the last dynamic link error
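
A minimal sketch of using them; the function name and signature for isbase64 are just an illustration taken from the side deck above:

#include <dlfcn.h>
#include <stdio.h>

typedef int (*isbase64_fn)(const char *);   /* assumed signature */

int main(void)
{
  void *h;
  isbase64_fn isbase64;

  h = dlopen("cert.so", RTLD_NOW);
  if (h == NULL) { printf("dlopen: %s\n", dlerror()); return 8; }

  isbase64 = (isbase64_fn) dlsym(h, "isbase64");
  if (isbase64 == NULL) { printf("dlsym: %s\n", dlerror()); dlclose(h); return 8; }

  printf("isbase64 returned %d\n", isbase64("QUJD"));
  dlclose(h);
  return 0;
}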

Creating a C header file from an assembler DSECT

I was writing a program to display information about a RACF keyring, using the RACF callable services. RACF provides a mapping for the structures – but this is an assembler macro, and no C header file is available.
Instead of crafting the header file by hand, I thought I would try the C DSECT conversion utility. This was pretty easy to use (once I had got it working!). But not 100% reliable.

The JCL is

//COLINZP  JOB 1,MSGCLASS=H 
//         JCLLIB ORDER=CBC.SCCNPRC 
//DSECT EXEC PROC=EDCDSECT, 
// INFILE=COLIN.C.SOURCE(DSECTA), 
// OUTFILE=COLIN.C.H.VB(TESTASM), 
// DPARM='EQU(BIT),LP64' 
/* 

The LP64 tells it to generate the header file for an LP64 program. It will generate pointers with __ptr32.

COLIN.C.SOURCE(DSECTA) has

AAAAA  CSECT 
*      IRRPCOMP 
AAA    DS  CL4 EYE CATCHER 
ABBB   DC  CL4'ABBB' 
       END 

The name of the structure is taken from the CSECT or DSECT the code is in. The above code produces

#pragma pack(packed)                                                    
                                                                        
struct aaaaa {                                                          
  unsigned char  aaa[4];  /* EYE CATCHER */                             
  unsigned char  abbb[4];                                               
  };                                                                    
                                                                        
#pragma pack(reset)                                                     

You need an output dataset with Variable Blocked format. When I used a Fixed Block dataset, the output was in columns 1 to 79, but by default C reads columns 1-72 for a Fixed Block file.

The JCL invokes the HLASM, and creates an ADATA file. This file has information about all of the data and instructions used. This ADATA file is passed through the CCNEDSCT program which generates C source from the ADATA information.

There are many negative comments on the internet about CCNEDSCT. For example, it does not pass block comments through.

I found it a bit buggy.

The source

AAAAA  CSECT 
AAA    DS  CL4 EYE CATCHER 
ABBB   DC  CL4'ABBB' 
ACCC   DC  CL4'ACCC' 
ASIZE  EQU *-AAA 
       END 

gave me

#pragma pack(packed) 
                                                          
struct aaaaa { 
  unsigned char  aaa[4];     /* EYE CATCHER */ 
  unsigned char  abbb[4]; 
  unsigned int          : 4, 
                 asize  : 2, 
                       : 26; 
  }; 
                                                          
#pragma pack(reset) 

Which is clearly wrong, as it is missing variable ACCC, and ASIZE is a bit field within an integer.

More information about ADATA.

The data is laid out as described in macro (on my system) HLA.SASMMAC1(ASMADATA).

One minute MVS: Binder and loader

This topic is in the series of “One minute MVS” giving the essentials of a topic.

Your program

The use of functions or subroutines is very common in programming. For a simple call

x = mysub()

which calls an external function mysub, the assembler or compiler generates code like

MYPROG  CSECT     
     L 15,mysub the function 
     LA 1,PARMLIB
     BASR  14,15 or BALR in older programs 
...
mysub  DC  V(MYSUB)

where

  • MYPROG is an entry point to the program
  • the mysub variable defines some storage for the external reference(ER) to MYSUB.

The output of the assembler or compiler is a file or dataset member, known as an “object deck” or “object file”. It cannot be executed, as it does not have the external functions or subroutines.

The binder (or linkage editor)

The binder program takes object decks, includes any subroutines and external code and creates a load module or program object.

In the early days, load modules were stored in PDS datasets. The directory entry for a member held information about the size of the load module, and the entry point. As the binder got more sophisticated, the directory did not have enough space for all of the data that was created. As a result PDSEs (Partitioned Data Set Extended) were created, which have an extendable directory entry. For Unix Services, load modules are stored in the Unix file system.

The term Program Object is used to cover load modules and files in the Unix file system. I still think of them both as Load Modules.

The binder takes the parts needed to create the program object, for example functions you created and stored in a PDS or in Unix, and includes code, for example for the printf() function. These are merged into one file.

Pictorially the merged files look like

  • Offset 0 some C code.
  • Offset 200 MYPROG Object
    • Offset 10 within MYPROG, MYPROG entry point (so offset 210 from the start of the merged files)
    • Offset 200 within MYPROG, mysub:V(MYSUB)
    • Offset 310 within MYPROG end of MYPROG
  • Offset 512 FUNCTION1 object
  • Offset 800 MYSUB1 Object
    • Offset 28 within MYSUB1, MYSUB entry point
    • Offset 320 within MYSUB1, end of MYSUB

The binder can now resolve references. It knows that MYSUB entry point is at offset 28 within MYSUB1 object, and MYSUB1 Object is 800 from the start of the combined files. It can now replace the mysub:V(MYSUB) in MYPROG with the offset value 828.

The entire merged file is stored as a load module (program object), as one object, with a name that you give it, for example COLIN.

The loader

When load module COLIN is loaded, the loader reads the load module from disk into memory, for example at address 200,000. As part of the loading, it looks at the external references and calculates the address in memory from the offset value. So 200,000 + offset 828 is address 200,828. This value is stored in the mysub variable.

When the function is about to be called via L 15,mysub, register 15 has the address of the code in memory and the program can branch to execute the code.

It gets more complex than this

Consider two source programs

int value = 0;
int total = 0;
void main()
{
  value =1;
  total = total + value; 
  printTotal();
  
}
int total;
int done;
void printTotal()
{
  printf("Total = %d\n",total);
  done = 1; 
}

There are some global static variables. The variable “total” is used in each one – it is the same variable.

These programs are defined as being re-entrant, and could be loaded into read only storage.

The variables “value” and “total”, cannot go into read only storage as they change during the execution of the program.

There are three global variables: “value”, “total” and “done”; total is common to both programs.

These variables go into a storage area called Writeable Static Area (WSA).

If there are multiple threads running the program, each gets its own copy of the WSA, but they can all share the instructions.

A program can also have 31 bit resident code, and 64 bit resident code. The binder takes all of these parts and creates “classes” of data

  • The WSA class. This contains the merged list of static variables.
  • 64-bit re-entrant code – class. It takes the 64-bit resident code from all of the programs, and included subroutines and creates a “64-bit re-entrant” blob.
  • 31- bit re-entrant code -class. It takes the 31-bit resident code from all of the programs, and included subroutines and creates a “31-bit re-entrant” blob.
  • 64-bit data – class, from all objects
  • 31-bit data – class, from all objects

When the loader loads the modules

  • It creates a new copy of the WSA for each thread
  • It loads the 64 bit re-entrant code (or reuses any existing copy of the code) into 64 bit storage
  • It loads the 31 bit re-entrant code (or reuses any existing copy of the code) into 31 bit storage.

How can I see what is in the load module?

If you look at the output from the binder you get output which includes content like

CLASS  B_TEXT            LENGTH =      4F4  ATTRIBUTES = CAT,   LOAD, RMODE=ANY 
CLASS  C_DATA64          LENGTH =        0  ATTRIBUTES = CAT,   LOAD, RMODE=ANY 
CLASS  C_CODE64          LENGTH =     1A38  ATTRIBUTES = CAT,   LOAD, RMODE= 64 
CLASS  C_@@QPPA2         LENGTH =        8  ATTRIBUTES = MRG,   LOAD, RMODE= 64 
CLASS  C_CDA             LENGTH =     3B50  ATTRIBUTES = MRG,   LOAD, RMODE= 64 
CLASS  B_LIT             LENGTH =      140  ATTRIBUTES = CAT,   LOAD, RMODE=ANY 
CLASS  B_IMPEXP          LENGTH =      A6B  ATTRIBUTES = CAT,   LOAD, RMODE=ANY 
CLASS  C_WSA64           LENGTH =      6B8  ATTRIBUTES = MRG, DEFER , RMODE= 64 
CLASS  C_COPTIONS        LENGTH =      304  ATTRIBUTES = CAT, NOLOAD 
CLASS  B_PRV             LENGTH =        0  ATTRIBUTES = MRG, NOLOAD 

Where

  • B_TEXT is from HLASM (assembler program). Any sections are conCATenated together (Attributes =CAT)
  • C_WSA64 is the 64 bit WSA. Any data in these sections have been MeRGed (see the “total” variable above) (Attributes = MRG)
  • C_COPTIONS contains the list of C options used at compile time. The loader ignores this class (NOLOAD), but it is available for advanced programs such as debuggers to extract this information from the load module.

To introduce even more complexity. You can have class segments. These are an advanced topic where you want groups of classes to be independently loaded. Most people use the default of 1 segment.

Layout of the load module

Class layout

You can see the layout of the classes in the segment.

  • Class B_TEXT starts at offset 0 and is length 4F4.
  • Class C_CODE64 is offset 4F8 (4F4 rounded up to the nearest doubleword) and of length 1A38.
CLASS  B_TEXT            LENGTH =      4F4  ATTRIBUTES = CAT,   LOAD, RMODE=ANY 
                         OFFSET =        0 IN SEGMENT 001     ALIGN = DBLWORD 
  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -  -
CLASS  C_CODE64          LENGTH =     1A38  ATTRIBUTES = CAT,   LOAD, RMODE= 64 
                         OFFSET =      4F8 IN SEGMENT 001     ALIGN = DBLWORD 

Within each class

CLASS  C_CODE64          LENGTH =     1A38  ATTRIBUTES = CAT,   LOAD, RMODE= 64 
                         OFFSET =      4F8 IN SEGMENT 001     ALIGN = DBLWORD 
--------------- 
                                                                                                  
 SECTION    CLASS                                      ------- SOURCE -------- 
  OFFSET   OFFSET  NAME                TYPE    LENGTH  DDNAME   SEQ  MEMBER 
                                                                                                  
                0  $PRIV000010        CSECT      1A38  /0000001  01 
       0        0     $PRIV000011        LABEL 
      B8       B8     PyInit_zconsole    LABEL 
     8B8      8B8     or_bit             LABEL 
     C30      C30     cthread            LABEL 
    1158     1158     cleanup            LABEL 
    11B8     11B8     printHex           LABEL 

  • The label $PRIV000010 CSECT is generated because I did not have a #pragma CSECT(CODE,”….”) statement in my source. If you use #pragma CSECT(STATIC,”….”) you get the name in the CLASS C_WSA64 section (see the following section).
  • The C function “or_bit” is at offset 8B8 in the class C_CODE64.

The static area

For the example below, the C module had #pragma CSECT(STATIC,”SZCONSOLE”)

CLASS  C_WSA64           LENGTH =      6B8  ATTRIBUTES = MRG, DEFER , RMODE= 64 
                         OFFSET =        0 IN SEGMENT 002     ALIGN = QDWORD 
--------------- 
                                                                                     
            CLASS 
           OFFSET  NAME                TYPE    LENGTH   SECTION 
                0  $PRIV000012      PART            10 
               10  SZCONSOLE        PART           5A0  ZCONSOLE 
              5B0  ascii_tab        PART           100  ascii_tab 
              6B0  gstate           PART             4  gstate 

There are two global static variables, common to all routines, ascii_tab, and gstate. They each have an entry defined in the class.

All of the static variables internal to routines are in the SZCONSOLE section. They do not have an explicit name because they are internal.