Easy question – hard answer: how do I convert a hex string to a hex byte string in C?

I have a program which takes as input a hex string, and this needs to be converted to an internal format, specifically a DER encoded format (also known as TLV: Tag, Length, Value).

This took me a good couple of hours to get right, and I thought the solution would be worth passing on.

The problem is: I have a C program and I pass in a parameter -serial abcd123456. I want to create a hex string 0x02llabcd123456 where ll is 5 – the length of the data.

Read the parameter

for (iArg = 1; iArg < argc; iArg++)
{
   if (strcmp(argv[iArg],"-serial") == 0
      && iArg + 1 < argc) // we have a value
   {
      iArg++;
      char * pData = argv[iArg];
      int iLen = strlen(pData);
      if (iLen > 16)
      {
         printf("Serial is too long(%d) %s.\n",iLen,pData);
         return 8;
      }
    ... process it.
}

The

if (strcmp(argv[iArg],"-serial") == 0 
      && iArg +1 < argc) // we have a value 

checks to see if -serial was specified, and there is a parameter following. It handles the case of passing “-serial” and no following parameter.

Convert it from hex string to internal hex

I looked at various ways of converting the character string to hex, and decided the C run time sscanf function was best. This is the opposite of printf: it takes a formatting string and converts data from printable format to internal format.

For example

sscanf(pData,"%x",&i);

would process the characters in the data pointed to by pData and convert them, treating them as hex data, into the integer i. The processing continues until a non-valid hex character is met, or the integer value is full.

If the parameter was -serial AC, the output value would be 0x000000AC.

I initially tried this, but then I had to go through the value and ignore any leading zeros.

You can use

sscanf(pData,"%6hhx",&bs[0]);

To read up to 6 characters into the byte string bs. If the parameter was -serial AC, the output value would be 0xAC….

This is almost what I want – I want a left justified byte string. But I have a variable length, and cannot pass a length as part of the string.

I managed this using a combination of sprintf and sscanf.

The final-ish solution

 
int len = strlen(pData); // get the length of passed value
char sscan[20];   // used for the sscanf format string
// we need to convert an ebcdic string to hex, so
//  "1" needs length of 1,
//  "12" needs length of 1
//  "123" needs length of 2 etc
int lHex = (len + 1) / 2;

// create a format string like "%4hhx" for the sscanf,
// as it needs a hard coded length
sprintf(&sscan[0],"%%%dhhx", len);
// if len = 4 this creates "%4hhx"
char tempOutput[16];
// Now use it
sscanf(pData,&sscan[0],&tempOutput[0]);

and tempOutput contains my data – of length lHex.

This worked until it didn’t work

This worked fine until a couple of hours later. If the hex value was 7F… or smaller it worked. If it was 80… or larger it did not work.

This is because of the way the DER format handles signed numbers.

The value 0x02036789ab says

  • 02 this is an integer field
  • 03 of length 03
  • with value 6789ab.

The value 0x0203abcdef says

  • 02 this is an integer field
  • 03 of length 03
  • with negative value abcdef – negative because the top bit of the number is a 1.

Special casing for negative numbers

I had to allow for this negative number effect.

For negative numbers, the output needs to be 0x020400abcdef which says

  • 02 this is an integer field
  • 04 of length 04
  • with value 00abcdef – positive because the top bit is zero.

pBuffer points to the output byte field.

 if ((unsigned char)tempOutput[0] < 0x80)
 {
   memcpy(pBuffer+1,&tempOutput[0],lHex); // move the data
   *pBuffer = lHex; // char value of the length
 }
 else // we need to insert an extra byte so we do not get -ve
 {
   *(pBuffer+1) = 0x00; // insert extra null
   memcpy(pBuffer+2,&tempOutput[0],lHex); // and the rest
   *pBuffer = lHex + 1; // char value of the length
 }

The solution is easy with hindsight.
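For comparison, here is a minimal sketch of an alternative approach, which converts the string two characters at a time with sscanf("%2hhx") and so avoids building a variable length format string. The function name and buffer size are just for illustration, and it assumes an even number of hex characters.

// Sketch: convert a hex string such as "abcd123456" into bytes, two
// characters at a time, then build the DER integer TLV, inserting a
// leading 0x00 byte if the top bit is set.
#include <stdio.h>
#include <string.h>

int hexToDerInteger(const char *pData, unsigned char *pBuffer)
{
    int len  = strlen(pData);        // assumes an even number of hex chars
    int lHex = len / 2;              // number of value bytes
    unsigned char value[16];         // no length checking - a real version
                                     // should check len <= 2*sizeof(value)
    for (int i = 0; i < lHex; i++)
        sscanf(pData + 2 * i, "%2hhx", &value[i]);

    int extra = (value[0] >= 0x80) ? 1 : 0; // avoid a negative DER value
    pBuffer[0] = 0x02;                      // tag: integer
    pBuffer[1] = lHex + extra;              // length
    if (extra) pBuffer[2] = 0x00;           // leading zero byte
    memcpy(pBuffer + 2 + extra, value, lHex);
    return 2 + extra + lHex;                // total length of the TLV
}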

Improving application performance – why, how ?

I’m working on a presentation on performance, for some university students, and I thought it would be worth blogging some of the content.

I had presented on what it was like working in industry, compared to working in a university environment. I explained what it is like working in a financial institution, where you have 10,000 transactions a second, transaction response times are measured in 10s of milliseconds, and if you are down for a day you are out of business. After this they asked how you tune applications and systems at this level of work.

Do you need to do performance tuning?

Like many questions about performance, the answer is it depends… it comes down to a cost benefit analysis. How much CPU (or money) will you save if you do the analysis and tuning? You could work for a month and save a couple of hundred pounds. You could work for a day and find CPU savings which mean you do not need to upgrade your systems, and so save lots of money.

It is not usually worth doing performance analysis on programs which run infrequently, or are of short duration.

Obvious statements

When I joined the performance team, the previous person in the role had left a month before, and the hand-over documentation was very limited. After a week or so making tentative steps into understanding the work, I came to realise the following (obvious once you think about it) statements:

  • A piece of work is either using CPU or is waiting.
  • To reduce the time a piece of work takes you can either reduce the CPU used, or reduce the waiting time.
  • To reduce the CPU you need to reduce the CPU used.
  • The best I/O is no I/O
  • Caching of expensive operations can save you a lot.

Scenario

In the description below I'll cover a moderately simple case, and also the case where there are concurrent threads accessing data.

Concurrent activity

When you have more than one thread in your application you will need to worry about data consistency. There are locks and latches:

  • Locks tend to be “long running” – from milliseconds to seconds. For example you lock a database record while updating it
  • Latches tend to be held across a block of code, for example manipulation of lists and updating pointers.

Storing data in memory

There are different ways of storing data in memory, from arrays and hash tables to binary trees. Some are easy to use, some have good performance.

Consider having a list of 10,000 names, which you have to maintain.

Array

An array is a contiguous block of memory with elements of the same size. To locate an element you calculate the offset: element number * size of element.

If the list is not sorted, you have to iterate over the array to find the element of interest.

If the list is sorted, you can do a binary search: for example if the array has 1000 elements, first check element 500 and see if the value is higher or lower, then select element 250 or 750, and so on.
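As a concrete illustration, here is a minimal sketch of a binary search over a sorted array of name pointers (the function name is just for illustration):

#include <string.h>

// return the index of the name in the sorted array, or -1 if not found
int findName(char *names[], int count, const char *wanted)
{
    int low = 0, high = count - 1;
    while (low <= high)
    {
        int mid = (low + high) / 2;        // check the middle element
        int c = strcmp(wanted, names[mid]);
        if (c == 0) return mid;            // found it
        if (c < 0) high = mid - 1;         // it is in the lower half
        else       low  = mid + 1;         // it is in the upper half
    }
    return -1;                             // not in the array
}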

An array is easy to use, but the size is inflexible; to change the size of the array you have to allocate a new array, copy old to new, release old.

Single Linked list

This is a chain of elements, where each element points to the next, there is a pointer to the start of the chain, and something to say end of chain (often "next" is 0).

This is flexible, in that you can easily add elements, but to find an element you have to search along the chain and so this is not suitable for long chains.

You cannot easily delete an element from the chain.

If you have A->B->D->Q, you can add a new element G by setting G->Q, and then D->G. If there are multiple threads you need to do this under a latch.
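A minimal sketch of that insert (the latch is shown as comments; use whatever serialisation your environment provides):

struct element {
    struct element *next;
    const char     *name;
};

// insert element g after element d, for example G between D and Q
void insertAfter(struct element *d, struct element *g)
{
    /* obtain the latch protecting the list */
    g->next = d->next;    // G -> Q
    d->next = g;          // D -> G
    /* release the latch */
}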

Doubly linked lists

This is like a single linked list, but you have a back chain as well. This allows you to easily delete an element. To add an element you have to update 4 pointers.

This is a flexible list where you can add and remove elements, but you have to scan it sequentially to find the element of interest, and so it is not suitable for long chains.

If there are multiple threads you need to do this under a latch.

Hash tables

Hash tables are a combination of array and linked lists.

You allocate an array of suitable size, for example 4096. You hash the key to a value between 0 and 4095 and use this as the index into the array. The value of the array is a linked list of elements with the same hash value, which you scan to find the element of interest.

You need a hash table size such that there are only a few (up to 10 to 50) elements in each linked list. The hash function needs to produce a wide spread of values; a hash function which returned one value would give you one long linked list.
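A minimal sketch of a hash table of names – an array of linked list heads indexed by a hash of the key (the hash function here is only illustrative):

#include <string.h>

#define HASHSIZE 4096

struct entry {
    struct entry *next;
    const char   *name;
};
static struct entry *table[HASHSIZE];   // array of linked list heads

static unsigned int hash(const char *key)
{
    unsigned int h = 0;
    while (*key) h = h * 31 + (unsigned char)*key++;
    return h % HASHSIZE;                // a value between 0 and 4095
}

struct entry *find(const char *name)
{
    struct entry *e;
    for (e = table[hash(name)]; e != NULL; e = e->next)  // scan the chain
        if (strcmp(e->name, name) == 0) return e;
    return NULL;                        // not in the table
}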

Binary trees

Binary trees are an efficient way of storing data. If there are any updates, you need to latch the tree while updates are made, which may slow down multi threaded programs.

Each node of a tree has 4 parts

  • The value of this node such as “COLIN PAICE”
  • A pointer to a node for values less than “COLIN PAICE”
  • A pointer to a node for values greater than “COLIN PAICE”
  • A pointer to the data record for this node.

If the tree is balanced the number of steps from the start of the tree to the element of interest is approximately the same for all elements.

If you add lots of elements you can get an unbalanced tree, where the tree looks like a flag pole rather than an apple tree. In this case you need to rebalance the tree.

You do not need to worry about the size of the tree because it will grow as more elements are added.

If you rebalance the tree, this will require a latch on the tree, and the rebalancing could be expensive.

There are C run time functions such as tsearch which walk the tree; if the element exists in the tree, tsearch returns the node. If it does not exist, it adds it to the tree and returns the new node.

This is not trivial to code – (but is much easier than coding a tree yourself).

You need to latch the tree when using multiple threads, which can slow down your access.
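A minimal sketch using the C run time tsearch() and tfind() functions from <search.h>; the names in the array are just for illustration:

#include <search.h>
#include <stdio.h>
#include <string.h>

static int compare(const void *a, const void *b)
{
    return strcmp((const char *)a, (const char *)b);
}

int main(void)
{
    void *root = NULL;                        // an empty tree
    char *names[3] = {"COLIN PAICE", "A N OTHER", "COLIN PAICE"};
    int i;
    for (i = 0; i < 3; i++)
    {
        // tsearch adds the key if it is not already there, and returns
        // a pointer to the tree node, which points to the key
        char **node = tsearch(names[i], &root, compare);
        printf("added/found %s\n", *node);
    }
    // tfind only searches - it returns NULL if the key is not in the tree
    printf("found? %s\n", tfind("A N OTHER", &root, compare) ? "yes" : "no");
    return 0;
}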

Optimising your code

Take the scenario where you write an application which is executed 1000 times a second.

int myfunc(char * name, int cost, int discount)
{
  int rc;
  printf("Values passed to myfunc %s cost %i discount %i\n",name,cost,discount);
  rc = dosomething();
  rc = 0;
  printf("exit from myfunc %i\n",rc);
  return rc;
}

Note: This is based on a real example. I went to a customer to help with a performance problem, and found the top user was printf() – printing out logging information. They commented this code out in all of their functions and it went 5 times faster.

You can make this go faster by having a flag you set to produce trace output, so

if (global.trace)
    printf("Values passed to myfunc %s cost %i discount %i\n",name,cost,discount);

You could do the same for the exit printf, but you may want to be more subtle, and use

if (global.traceNZonexit && rc != 0)
   printf("exit from myfunc %i\n",rc);

This is useful when the return code is 0 most of the time. It is also useful if someone reports problems with the application – you can then say "there was an access-denied message at the time of your problem".

FILE * hFile = 0;
for (i = 0; i < 100; i++)
{
    /* create a buffer with our data in it */
    lenData = sprintf(buffer,"userid %s, parm %s\n", getid(), inputparm);
    error = ….();
    if (error > 0)
    {
       hFile = fopen("outputfile","a");
       fwrite(buffer,1,lenData,hFile);
       fclose(hFile);
    }
…
}

This can be improved

  • by moving the getid() out of the loop – it does not change within the loop
  • move the lenData = sprintf… inside the error block
  • change the error block, as below
{
  ...
  if (error > 0)
  {
     if (hFile == 0)
     {
        hFile = fopen("outputfile","a");
        pUserid = strdup(getid());
     }
     fwrite(buffer,1,lenData,hFile);
  }
...
}
if (hFile != 0)
   fclose(hFile);

You can take this further, and have the file handle passed in to the function, so it is only opened once, rather than every time the function is invoked.

main()
{
   struct {FILE * hFile;
      …
   } threadBlock;
   threadBlock.hFile = 0;
   for (i = 1; i < 9999; i++)
      myprog(&threadBlock);
   if (threadBlock.hFile != 0)
      fclose(threadBlock.hFile);
}
// subroutine
myprog(struct threadBlock * pt, ...)
{
...
  if (error > 0)
  {
     if (pt->hFile == 0)
     {
        pt->hFile = fopen("outputfile","a");
     }
     fwrite(buffer,1,lenData,pt->hFile);
  }
...
}
   

Note: If this is a long running “production” system you may want to open the file as part of application startup to ensure the file can be opened etc, rather than find this out two days later.

Migrating from cc to xlc is like playing twister

I needed to compile a file in Unix System Services; I took an old make file, changed cc to xlc expecting it to compile and had lots of problems.

It feels like the documentation was well written in the days of the cc and c89 compilers, and has had a different beast inserted into it since.

As I started to write this blog post, I learned even more about compiling in Unix System Services on z/OS!

Make file using cc

cparmsa= -Wc,"SSCOM,DEBUG,DEF(MVS),DEF(_OE_SOCKETS),UNDEF(_OPEN_DEFAULT),NOOE 
cparmsb= ,SO,SHOW,LIST(),XREF,ILP32,DLL,SKIPS(HIDE)" 
syslib= -I'/usr/include' -I'/usr/include/sys'  -I"//'TCPIP.SEZACMAC'" -I"//'TCPIP.SEZANMAC'" 
all: main 
parts =  tcps.o 
main: $(parts)
  cc -o tcps  $(parts) 
                                                                                                                            
%.o: %.c 
 cc  -c -o $@   $(syslib) $(cparmsa)$(cparmsb)    -V          $< 
 
clean: 
 rm  *.o 

The generated compile statement is

cc -c -o tcps.o -I'/usr/include' -I'/usr/include/sys' -I"//'TCPIP.SEZACMAC'" -I"//'TCPIP.SEZANMAC'" -Wc,"SSCOM,DEBUG,DEF(MVS),DEF(_OE_SOCKETS),UNDEF(_OPEN_DEFAULT),NOOE,SO, SHOW,LIST(),XREF,ILP32,DLL,SKIPS(HIDE)" -V tcps.c

Note the following

  • the -V option generates the listing. "-V produces all reports for the compiler, and binder, or prelinker, and directs them to stdout". If you do not have -V you do not get a listing.
  • -Wc,list() says generate a listing with a name like tcps.lst, based on the file name being compiled. If you use list(x.lst) it does not produce any output! This is contrary to what the documentation says. (Possibly a compiler bug when specifying NOOE.)
  • SHOW lists the included files
  • SKIPS(HIDE) omits the stuff which is not used – see below.

Make using xlc

I think the xlc compiler has bits from z/OS and bits from AIX (sensibly sharing code!). On AIX some parameters are passed using -q. You might use -qSHOWINC or -qNOSHOWINC instead of -Wc,SHOWINC

cparmsx= -Wc,"SO,SHOW,LIST(lst31),XREF,ILP32,DLL,SSCOM, 
cparmsy= DEBUG,DEF(MVS),DEF(_OE_SOCKETS),UNDEF(_OPEN_DEFAULT),NOOE" 
cparms3= -qshowinc -qso=./lst.yy  -qskips=hide -V 
syslib= -I'/usr/include' -I'/usr/include/sys'  -I"//'TCPIP.SEZACMAC'" -I"//'TCPIP.SEZANMAC'" 
all: main 
parts =  tcps.o 
main: $(parts) 
  cc -o tcps  $(parts) 
                                                                                                      
%.o: %.c 
 xlc -c -o $@   $(syslib) $(cparmsx)$(cparmsy) $(cparms3)     $< 
                                                                                                      
clean: 
 rm  *.o 

This generates a statement

xlc -c -o tcps.o -I'/usr/include' -I'/usr/include/sys' -I"//'TCPIP.SEZACMAC'" -I"//'TCPIP.SEZANMAC'" -Wc,"SO,SHOW,LIST(lst31),XREF, ILP32,DLL, SSCOM,DEBUG,DEF(MVS),DEF(_OE_SOCKETS), UNDEF(_OPEN_DEFAULT),NOOE" -qshowinc -qso=./lst.yy -qskips=hide tcps.c

Note the -q options. You need -qso=…. to get a listing.

Any -V option is ignored, and LIST(…) is not used.

Note: There is a buglet in the compiler, specifying nooe does not always produce a listing. The above xlc statement gets round this problem.

SKIPS(SHOW|HIDE)

SKIPS(HIDE) (SKIPS is short for SKIPSRC) shows you what is used, and suppresses source which is not used. I found this useful when trying to find the combination of #define … needed to get the program to compile.

For example with SKIPS(SHOW)

170 |#if 0x42040000 >= 0X220A0000                               | 672     4      
171 |    #if defined (_NO_PROTO) &&  !defined(__cplusplus)      | 673     4      
172 |        #define __new210(ret,func,parms) ret func ()       | 674     4      
173 |    #else                                                  | 675     4      
174 |        #define __new210(ret,func,parms) ret func parms    | 676     4      
175 |    #endif                                                 | 677     4      
176 |#elif !defined(__cplusplus) && !defined(_NO_NEW_FUNC_CHECK)| 678     4      
177 |       #define __new210(ret,func,parms) \                  | 679     4      
178 |        extern struct __notSupportedBeforeV2R10__ func     | 680     4      
179 |   #else                                                   | 681     4      
180 |     #define __new210(ret,func,parms)                      | 682     4      
181 |#endif                                                     | 683     4      

With SKIPS(HIDE) the lines which were skipped are not displayed:

170 |#if 0x42040000 >= 0X220A0000                              | 629     4 
171 |    #if defined (_NO_PROTO) &&  !defined(__cplusplus)     | 630     4 
172 |        #define __new210(ret,func,parms) ret func ()      | 631     4 
173 |     else                                                 | 632     4 
175 |    #endif                                                | 633     4 
176 |                                                          | 634     4 
179 |   #else                                                  | 635     4 
181 |#endif                                                    | 636     4 
182 |#endif                                                    | 637     4 

This shows

  • 170 is the line number within the included file
  • 629 is the line number within the listing
  • 4 means it is the 4th included file. In the "I N C L U D E S" section it says 4 /usr/include/features.h
  • row 174 is missing … this is the #else text which was not included
  • rows 177, 178, 180 are omitted.

This makes it much easier to browse through the includes to find why you have duplicate definitions and other problems.

Compiling the TCP/IP samples on z/OS

Communications Server (TCPIP) on z/OS provides some samples. I had problems getting these to compile, because the JCL in the documentation was a) wrong and b) about 20 years behind the times.

Samples

There are some samples in TCPIP.SEZAINST

  • TCPS: a server which listens on a port
  • TCPC: a client which connects to a server using IP address and port
  • UDPC: C socket UDP client
  • UDPS: C socket UDP server
  • MTCCLNT: C socket Multitasking client
  • MTCSRVR: C socket Multitasking server
  • MTCCSUB: C socket subtask MTCCSUB

The JCL I used is

//COLCOMPI   JOB 1,MSGCLASS=H,COND=(4,LE) 
//S1          JCLLIB ORDER=CBC.SCCNPRC 
// SET LOADLIB=COLIN.LOAD 
// SET LIBPRFX=CEE 
// SET SOURCE=COLIN.C.SOURCE(TCPSORIG) 
//COMPILE  EXEC PROC=EDCCB, 
//       LIBPRFX=&LIBPRFX, 
//       CPARM='OPTFILE(DD:SYSOPTF),LSEARCH(/usr/include/)', 
// BPARM='SIZE=(900K,124K),RENT,LIST,RMODE=ANY,AMODE=31' 
//COMPILE.SYSLIB DD 
//               DD 
//               DD DISP=SHR,DSN=TCPIP.SEZACMAC 
//*              DD DISP=SHR,DSN=TCPIP.SEZANMAC  for IOCTL 
//COMPILE.SYSOPTF DD * 
DEF(_OE_SOCKETS) 
DEF(MVS) 
LIST,SOURCE 
TEST 
RENT ILP32        LO 
INFO(PAR,USE) 
NOMARGINS EXPMAC   SHOWINC XREF 
LANGLVL(EXTENDED) sscom dll 
DEBUG 
/* 
//COMPILE.SYSIN    DD  DISP=SHR,DSN=&SOURCE 
//BIND.SYSLMOD DD DISP=SHR,DSN=&LOADLIB. 
//BIND.SYSLIB  DD DISP=SHR,DSN=TCPIP.SEZARNT1 
//             DD DISP=SHR,DSN=&LIBPRFX..SCEELKED 
//* BIND.GSK     DD DISP=SHR,DSN=SYS1.SIEALNKE 
//* BIND.CSS    DD DISP=SHR,DSN=SYS1.CSSLIB 
//BIND.SYSIN DD * 
  NAME  TCPS(R) 
//START1   EXEC PGM=TCPS,REGION=0M, 
// PARM='4000          ' 
//STEPLIB  DD DISP=SHR,DSN=&LOADLIB 
//SYSERR   DD SYSOUT=*,DCB=(LRECL=200) 
//SYSOUT   DD SYSOUT=*,DCB=(LRECL=200) 
//SYSPRINT DD SYSOUT=*,DCB=(LRECL=200) 

Change the source

The samples do not compile with the above JCL. I needed to remove some includes

#include <manifest.h> 
// #include <bsdtypes.h> 
#include <socket.h> 
#include <in.h> 
// #include <netdb.h> 
#include <stdio.h> 

With the original sample I got compiler messages

ERROR CCN3334 CEE.SCEEH.SYS.H(TYPES):66 Identifier dev_t has already been defined on line 98 of "TCPIP.SEZACMAC(BSDTYPES)".
ERROR CCN3334 CEE.SCEEH.SYS.H(TYPES):77 Identifier gid_t has already been defined on line 101 of "TCPIP.SEZACMAC(BSDTYPES)".
ERROR CCN3334 CEE.SCEEH.SYS.H(TYPES):162 Identifier uid_t has already been defined on line 100 of "TCPIP.SEZACMAC(BSDTYPES)".
ERROR CCN3334 CEE.SCEEH.H(NETDB):87 Identifier in_addr has already been defined on line 158 of "TCPIP.SEZACMAC(IN)".


INFORMATIONAL CCN3409 TCPIP.SEZAINST(TCPS):133 The static variable “ibmcopyr” is defined but never referenced.

I tried many combinations of #define but could not get it to compile, unless I removed the #includes.

Compile problems I stumbled upon

Identifier dev_t has already been defined on line ...                                                     
Identifier gid_t has already been defined on line ...                                                     
Identifier uid_t has already been defined on line ....

This was caused by the wrong libraries in SYSLIB. I needed

  • CEE.SCEEH.H
  • CEE.SCEEH.SYS.H
  • TCPIP.SEZACMAC
  • TCPIP.SEZANMAC

The compile problems were caused by CEE.SCEEH.SYS.H being missing.

Execution problems

I had some strange execution problems when I tried to use AT-TLS within the program.

EDC5000I No error occurred. (errno2=0x05620062)

The errno2 reason from TSO BPXMTEXT 05620062 was

BPXFSOPN 04/27/18
JRNoFileNoCreatFlag: A service tried to open a nonexistent file without O_CREAT

Action: The open service request cannot be processed. Correct the name or the open flags and retry the operation.

Which seems very strange. I have a feeling that this field is not properly initialised and that this value can be ignored.

Running assembler control block chains in C

I needed to extract some information from z/OS in my C program. There is not a callable interface for the data, so I had to chain through z/OS control blocks.

Once you have an example to copy it is pretty easy – it is just getting started which is the problem.

I have code (which starts with PSATOLD)

 #define PSA  540 
 char *TCB   = (char*)*(int*)(PSA); 
 char *TIO   = (char*)*(int*)(TCB + 12); 
 char *TIOE  = (char*)(TIO + 24) ; 
                                                            

  • At absolute address 540 (0x21C) is the address of the currently executing TCB.
  • (int *) (PSA) says treat this as an integer (4 byte) pointer.
  • * take the value of what this integer pointer points to. This is the address of the TCB
  • TCB + 12. Offset 12 (0x0C) in the TCB is the address of the Task I/O Table (TIOT)
  • (int *) says treat this as an integer ( 4 byte) pointer
  • * take the value of it to get to the TIOT
  • Offset 24 (0x18) is the location of the first TIOT entry in the control block

When I copied the code, the original had (char *)*(long *)PSA. This worked fine in 31 bit programs, but not in a 64 bit program, as it uses 64 bits as an address – not 32! I had to use "int" to get it to work.

Another example, which prints the CPU TCB and SRB time used by each address space, is

// CVT Main anchor for many system wide control blocks
#define FLTCVT     16L
//  The first Address Space Control Block
#define CVTASCBH  564L
// the chain of ASCBs - next
#define ASCBFWDP    4L
//  accumulated job step TCB time
#define ASCBEJST   64L
//  accumulated SRB time - check the IHAASCB mapping for the offset on your system
#define ASCBSRBT  200L
// the ASID of this address space
#define ASCBASID   36L


__int64 lTCB, lSRB; // could have used long long
short ASID;
int i;
char *plStor = (char*)FLTCVT;
char *plCVT  = (char*)*(int*)plStor;
char *plASCB = (char*)*(int*)(plCVT+CVTASCBH); // first ASCB
for( i=0; i<1000 && plASCB != NULL;
       i++, plASCB = (char*)*(int*)(plASCB+ASCBFWDP) )
{
  lTCB = *(__int64*)(plASCB+ASCBEJST) >> 12; // microseconds
  lSRB = *(__int64*)(plASCB+ASCBSRBT) >> 12; // microseconds
  ASID = *(short*)(plASCB+ASCBASID);
  printf("ASID=%4.4hx TCB=%lld; SRB=%lld\n", ASID, lTCB, lSRB);
}

Creating a C external function for Python, an easier way to compile

I wrote about my first steps in creating a C extension in Python. Now I’ve got more experience, I’ve found an easier way of compiling the program and creating a load module. It is not the official way – but it works, and is easier to do!

The traditional way of building a package is to use the setup.py technique. I've found just compiling it works just as well (and is slightly faster). You still need the setup.py for building Python source.

I set up a cp4.sh file

name=zos
pythonSide='/usr/lpp/IBM/cyp/v3r8/pyz/lib/python3.8/config-3.8/libpython3.8.x'
export _C89_CCMODE=1
p1=" -DNDEBUG -O3 -qarch=10 -qlanglvl=extc99 -q64"
p2="-Wc,DLL -D_XOPEN_SOURCE_EXTENDED -D_POSIX_THREADS"
p2="-D_XOPEN_SOURCE_EXTENDED -D_POSIX_THREADS"
p3="-D_OPEN_SYS_FILE_EXT -qstrict "
p4="-Wa,asa,goff -qgonumber -qenum=int"
p5="-I//'COLIN.MQ930.SCSQC370' -I. -I/u/tmp/zpymqi/env/include"
p6="-I/usr/lpp/IBM/cyp/v3r8/pyz/include/python3.8"
p7="-Wc,ASM,EXPMAC,SHOWINC,ASMLIB(//'SYS1.MACLIB'),NOINFO "
p8="-Wc,LIST(c.lst),SOURCE,NOWARN64,FLAG(W),XREF,AGG -Wa,LIST,RENT"
/bin/xlc $p1 $p2 $p3 $p4 $p5 $p6 $p7 $p8 -c $name.c -o $name.o -qexportall -qagg -qascii
l1="-Wl,LIST=ALL,MAP,XREF -q64"
l1="-Wl,LIST=ALL,MAP,DLL,XREF -q64"
/bin/xlc $name.o $pythonSide -o $name.so $l1 1>a 2>b
oedit a
oedit b

This shell script creates a zos.so load module in the current directory.

You need to copy the output load module (zos.so) to a directory on the PYTHONPATH environment variable.
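The post does not show the source of zos.c itself. For orientation, a minimal C extension module looks something like the sketch below; the hello function is purely illustrative and is not part of the real zos module.

#define PY_SSIZE_T_CLEAN
#include <Python.h>

// a trivial function exposed to Python as zos.hello("name")
static PyObject *zos_hello(PyObject *self, PyObject *args)
{
    const char *name;
    if (!PyArg_ParseTuple(args, "s", &name))
        return NULL;
    return PyUnicode_FromFormat("hello %s", name);
}

static PyMethodDef zosMethods[] = {
    {"hello", zos_hello, METH_VARARGS, "say hello"},
    {NULL, NULL, 0, NULL}
};

static struct PyModuleDef zosmodule = {
    PyModuleDef_HEAD_INIT,
    "zos",        // module name - must match PyInit_zos and the zos.so name
    NULL,         // module documentation
    -1,
    zosMethods
};

PyMODINIT_FUNC PyInit_zos(void)
{
    return PyModule_Create(&zosmodule);
}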

What do the parameters mean?

Many of the parameters I blindly copied from the setup.py script.

  • name=zos
    • This parametrizes the script, for example $name.c $name.o $name.so
  • pythonSide='/usr/lpp/IBM/cyp/v3r8/pyz/lib/python3.8/config-3.8/libpython3.8.x'
    • This is the Python side deck, used to resolve links to the functions in the Python code
  • export _C89_CCMODE=1
    • This is needed to prevent the message “FSUM3008 Specify a file with the correct suffix (.c, .i, .s,.o, .x, .p, .I, or .a), or a corresponding data set name, instead of -o./zos.so.”
  • p1=" -DNDEBUG -O3 -qarch=10 -qlanglvl=extc99 -q64"
    • -O3 optimization level
    • -qarch=10 is the architectural level of the code to be produced.
    • -qlanglvl=extc99 says use the C extensions defined in C99 (for example defining variables in the middle of a program, rather than only at the top)
    • -q64 says make this a 64 bit program
  • p2="-D_XOPEN_SOURCE_EXTENDED -D_POSIX_THREADS"
    • The C #defines to preset
  • p3="-D_OPEN_SYS_FILE_EXT -qstrict "
    • -qstrict Used to prevent optimizations from re-ordering instructions that could introduce rounding errors.
  • p4="-Wa,asa,goff -qgonumber -qenum=int"
    • -Wa,asa,goff options for any assembler compiles (not used)
    • -qgonumber include C program line numbers in any dumps etc
    • -qenum=int use integer variables for enums
  • p5="-I//'COLIN.MQ930.SCSQC370' -I. -I/u/tmp/zpymqi/env/include"
    • Where to find #includes:
    • the MQ libraries,
    • the current working directory
    • the header files for my component
  • p6="-I/usr/lpp/IBM/cyp/v3r8/pyz/include/python3.8"
    • Where to find #includes
  • p7="-Wc,ASM,EXPMAC,SHOWINC,ASMLIB(//'SYS1.MACLIB'),NOINFO "
    • Support the use of __ASM().. to use inline assembler code.
    • Expand macros to show what is generated
    • List the data from #includes
    • If using __ASM__(…), where to find assembler copy files and macros.
    • Do not report information messages
  • p8="-Wc,LIST(c.lst),SOURCE,NOWARN64,FLAG(W),XREF,AGG -Wa,LIST,RENT"
    • For C compiles, produce a listing in c.lst,
    • include the C source
    • do not warn about problems with 64 bit/31 bit
    • display the cross references (where used)
    • display information about structures
    • For Assembler programs generate a list, and make it reentrant
  • /bin/xlc $p1 $p2 $p3 $p4 $p5 $p6 $p7 $p8 -c $name.c -o $name.o -qexportall
    • Compile $name.c into $name.o (so zos.c into zos.o) and export all entry points for DLL processing
  • l1="-Wl,LIST=ALL,MAP,DLL,XREF -q64"
    • bind parameters (-Wl,): produce a report,
    • show the map of the module
    • show the cross reference
    • it is a 64 bit object
  • /bin/xlc $name.o $pythonSide -o $name.so $l1 1>a 2>b
    • take the zos.o, the Python side deck and bind them into the zos.so
    • pass the parameters defined in l1
    • output the cross reference to a and errors to b
  • oedit a
    • This will have the map, cross reference and other output from the bind
  • oedit b
    • This will have any error messages – it should be empty

Notes:

  • -qarch=10 is the default
  • the -Wa are for when compiling assembler source eg xxxx.s
  • -qlanglvl=extc99. EXTENDED may be better than extc99.
  • it needs the -qascii to work with Python.

When is an error not an err?

If you step off the golden path of trying to read a file, you can quickly end up in trouble, and the diagnostics do not help.

I had some simple code

FILE * hFile = fopen(...); 
recSize = fread(pBuffer ,1,bSize,hFile); 
if (recSize == 0)
{
  // into the bog!
 if (feof(hFile))printf("end of file\n");
 else if (ferror(hFile)) printf("ferror(hFile) occurred\n");
 else printf("Cannot occur condition\n");
 
}

When running a unit test of the error path, passing a bad file handle, I got the "Cannot occur condition" message, because ferror() returned "OK – no problem".

The ferror() description is

General description: Tests for an error in reading from or writing to the specified stream. If an error occurs, the error indicator for the stream remains set until you close the stream, call rewind(), or call clearerr().
If a non-valid parameter is given to an I/O function, z/OS XL C/C++ does not turn the error flag on. This case differs from one where parameters are not valid in context with one another.

This gave me 0, so it was not able to detect my error. ( So what is the point of ferror()?)

If I looked at errno and used perror() I got

errno 113
EDC5113I Bad file descriptor. (errno2=0xC0220001)

You may think that I need to ignore ferror() and check errno != 0 instead. Good guess, but it may not be that simple.

The __errno2 (or errnojr – errno junior) description is

General description: The __errno2() function can be used when diagnosing application problems. This function enables z/OS XL C/C++ application programs to access additional diagnostic information, errno2 (errnojr), associated with errno. The errno2 may be set by the z/OS XL C/C++ runtime library, z/OS UNIX callable services or other callable services. The errno2 is intended for diagnostic display purposes only and it is not a programming interface. The __errno2() function is not portable.
Note: Not all functions set errno2 when errno is set. In the cases where errno2 is not set, the __errno2() function may return a residual value. You may use the __err2ad() function to clear errno2 to reduce the possibility of a residual value being returned.

If you are going to use __errno2 you should clear it using __err2ad() before invoking a function that may set it.

I could not find whether errno is cleared or whether it may return a residual value, so to be safe set it to zero before every use of a C run time library function.

Having got your errno value what do you do with it?

There are #define constants in errno.h such as

#define EIO 122 /* Input/output error */

You can use if (errno == EIO) …

Like many products, there is no mapping from 122 back to the name "EIO", but you can use strerror(errno) to map the errno to the error string, like EDC5113I Bad file descriptor. (errno2=0xC0220001). This also provides the errno2 string value.
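Pulling this together, here is a minimal sketch of checking a failed C run time call; the z/OS specific __errno2() call is shown in a comment, as described above, and the file name is just for illustration:

#include <stdio.h>
#include <errno.h>
#include <string.h>

int main(void)
{
    // clear errno before the call so a residual value is not mistaken for
    // a new error; on z/OS you can also clear errno2 with *__err2ad() = 0;
    errno = 0;
    FILE *f = fopen("does.not.exist", "r");
    if (f == NULL)
    {
        printf("fopen failed: errno=%d (%s)\n", errno, strerror(errno));
        perror("fopen");   // writes the EDC... message to stderr
        // on z/OS: printf("errno2=%08x\n", __errno2());
        return 8;
    }
    fclose(f);
    return 0;
}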

Using ASCII stuff in a C program

This is another topic which looks simple on the surface but has hidden depths. (Where "hidden depths" is a euphemism for "looks buggy".) It started off as one page, but by the time I had discovered the unexpected behaviours, it became 10 times the size. The decision tree to say whether text will be printed in ASCII or EBCDIC was three levels deep – but I give a solution.

Why do you want to use ASCII stuff in a C program?

Java, Python and Node.js run on z/OS, and they use ASCII for the character strings, so if you are writing JNI interfaces for Java, or C external functions for Python you are likely to use this.


The high level view

You can define ASCII character strings in C using:

char * pA = "\x41\x42\x43\x30" ;  // "ABC0" in ASCII
// or 
#pragma convert(819)
char * pASCII = "CODEPAGE 819 data" ;
#pragma convert(pop)

And EBCDIC strings (for example when using ASCII compile option)

#pragma convert("IBM-1047") 
char * pEBCDIC = "CODEPAGE 1047  data" ; 
#pragma convert(pop) 

You can define that the whole program is "ASCII" by using the -Wc,ASCII or -qascii options at compile time. This also gives you the ASCII versions of printf and other functions.

You can use requests like fopen("myfile"…) and the code successfully handles the file name in ASCII. Under the covers I expect the code does __a2e_l() to convert from ASCII to EBCDIC, then uses the EBCDIC version of fopen().

The executable code is essentially the same – the character strings are in ASCII, and the “Flags” data is different (to reflect the different compile options). At bind time different stubs are included.

To get my program compiled with the -qascii option to successfully print to the OMVS terminal, piped using |, or redirected using 1>, I had to use the C run time function fcntl() to set the code page and tag of STDOUT and STDERR. See below.

You need to set both STDERR and STDOUT – as perror() prints to STDERR, and you may need this if you have errors in the C run time functions.

Some of the hidden depths!

I had a simple “Hello World” program which I compiled in USS with the -qascii option. Depending on how I ran it, I got “Hello World” or “çÁ%%?-ï?Ê%À” (Hello World in ASCII).

  • straight “./prog”. The output in ASCII
  • pipe “./prog | cat”. If I used the environment variable _TAG_REDIR_OUT=”TXT” the output was in EBCDIC – otherwise it came out in ASCII.
  • redirect to a file "./prog 1> aaa". Depending on the environment variable _BPXK_AUTOCVT="ON", whether the file already existed, and whether an existing file was tagged, the output could be in EBCDIC or ASCII!

So all in all – it looks a bit of a mess.

Background to outputting data

Initially I found printing of ASCII data was easy; then I found what I had written sometimes did not work, and after a week or so I had a clearer idea of what is involved, then I found a few areas which were even more complex. You may want to read this section once, read the rest of the blog post, then come back to this section.

A file can have

  • a tag – this is typically “this file is binary|text|unknown”
  • the code page of the data – or not specified.
  • you can use ls -T file.name to display the tag and code page of a file.

Knowing that information…

  • If a file has a tag=text and an ASCII code page,
    • if the _BPXK_AUTOCVT=”ON” environment flag is set it will display in EBCDIC (eg cat…)
    • else (_BPXK_AUTOCVT=”OFF”) it will display unreadable ASCII (eg cat…)
  • If a file has a tag=binary then converting to a different code page makes no sense. For example a .jpeg or load module. Converting a load module would change the instructions Store Half Word(x40) to a Load Positive (x20).
  • If a file is not tagged – it becomes a very fuzzy area. The output is dependent on the program running. An ASCII program would output ASCII, and an EBCDIC program would output EBCDIC.
  • ISPF edit is smart enough to detect the file is ASCII – and enable ISPF edit command “Source ASCII”.

Other strands to the complexity

  • Terminal type
    • If you are logged on using rlogin, you can use chcp to change the tag or code page of your terminal.
    • If you are logged in through TSO – using OMVS, you cannot use chcp. I can’t find a command to set the tag or code page, but you can set it programmatically.
  • Redirection
    • You can print to the terminal or redirect the output to a file for example ./runHello 1>output.file .
    • The file can be created with the appropriate tag and code page.
    • You can use the environment variables _TAG_REDIR_OUT=TXT|BIN and _TAG_REDIR_ERR=TXT|BIN to specify what the redirected output will be.
    • If you use _TAG_REDIR_OUT=TXT, the output is in EBCDIC.
      • you can use ./prog | cat to take the output of prog and pipe it through cat to display the lines in EBCDIC.
      • you can use ./prog |grep ale etc. For me this displayed the EBCDIC line with Locale in it.
    • You can have the situation where if you print to the terminal it is in ASCII, if you redirect, it looks like EBCDIC!
  • The tag and ccsid are set on the first write to the file. If you write some ASCII data, then set the tag and code page, the results are unpredictable. You should set the tag and ccsid before you try to print anything.

What is printed?

A rough set of rules to determine what gets printed are

  • If the output file has a non zero code page, and _BPXK_AUTOCVT="ON" is set, the output is the data converted from the code page to EBCDIC.
  • If the output file has a zero code page, then the code page from the program is used. A program compiled with ASCII will have an ASCII code page, else it will default to 1047.

A program can set the tag, and code page using the fcntl() function.

Once the tag and code page for STDOUT and STDERR have been set, they remain set until reset. This leads to the confusing sequence of commands and output

  • run my ascii compiled “hello world” program – the data comes out as ASCII.
  • run python3 --version, which sets the STDOUT code page.
  • rerun my ascii compiled “hello world” program – the data comes out as EBCDIC! This is because the STDOUT code page was set by Python (and not reset).

I could not find a field to tell me which code page should be used for STDOUT and STDERR, so I suggest using 1047 unless you know any better.

Using fcntl() to display and set the code page of the output STDOUT and STDERR

To display the information for an open file you can use the fcntl() function. The stat() function also provides information on the open file. They may not be entirely consistent when using a pipe or redirect.

#include <fcntl.h> 
...
struct f_cnvrt f; 
f.cvtcmd = QUERYCVT; 
f.pccsid = 0; 
f.fccsid = 0; 
int action = F_CONTROL_CVT; 
rc =fcntl( STDOUT_FILENO,action, &f ); 
...
switch(f.cvtcmd) 
{ 
  case QUERYCVT : printf("QUERYCVT - Query"); break; 
  case SETCVTOFF: printf("SETCVTOFF -Set off"); break; 
  case SETCVTON : printf("SETCVTON -Set on unconditionally"); break; 
  case SETAUTOCVTON : printf("SETAUTOCVTON - Set on conditionally"); break; 
  default:  printf("cvtcmd %d",f.cvtcmd); 
} 
printf(" pccsid=%hd fccsid=%hd\n",f.pccsid,f.fccsid); 

For my program compiled in EBCDIC the output was

SETCVTOFF -Set off pccsid=1047 fccsid=0

This shows the program was compiled as EBCDIC (1047), and the STDOUT ccsid was not set.

When the ascii compiled version was used, the output, in ASCII was

SETCVTON -Set on unconditionally pccsid=819 fccsid=819

This says the program was ASCII (819) and the STDOUT code page was ASCII (819).

When I reran the ebcdic complied version, the output was

SETCVTOFF -Set off pccsid=1047 fccsid=0

Setting the code page

printf("before text\n");
f.cvtcmd = SETCVTON ; 
f.fccsid = 0x0417  ;  // file code page 0x417 = 1047
f.pccsid = 0x0000  ;  // program code page: 0x333 (819) = ascii, 0 = take default
rc =fcntl( STDOUT_FILENO,action, &f ); 
if ( rc != 0) perror("fcntl"); 
printf("After fcntl\n"); 

The first time this ran as an ASCII program, the “before text” was not readable, but the “After fcntl” was readable. The second time it was readable. For example, the second time:

SETCVTON -Set on unconditionally pccsid=819 fccsid=1047
before text
After fcntl

You may want to put some logic in your program (a sketch follows the list):

  • use fcntl and f.cvtcmd = QUERYCVT for STDOUT and STDERR
  • if fccsid == 0 then set it to 1047, using fcntl and f.cvtcmd = SETCVTON
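A minimal sketch of that logic, using the same f_cnvrt structure and F_CONTROL_CVT action as above (the function name is just for illustration):

#include <stdio.h>
#include <unistd.h>
#include <fcntl.h>

// if the file descriptor has no ccsid set, set it to 1047 (EBCDIC)
static void defaultCcsid1047(int fd)
{
    struct f_cnvrt f;
    f.cvtcmd = QUERYCVT;
    f.pccsid = 0;
    f.fccsid = 0;
    if (fcntl(fd, F_CONTROL_CVT, &f) != 0)
    {
        perror("fcntl QUERYCVT");
        return;
    }
    if (f.fccsid == 0)           // no ccsid set for this file descriptor
    {
        f.cvtcmd = SETCVTON;
        f.fccsid = 1047;         // treat the output as EBCDIC 1047
        f.pccsid = 0;            // 0 = take the program's code page
        if (fcntl(fd, F_CONTROL_CVT, &f) != 0)
            perror("fcntl SETCVTON");
    }
}

// for example: defaultCcsid1047(STDOUT_FILENO); defaultCcsid1047(STDERR_FILENO);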

Set the file tag

This is needed to set the file tag for use when output is piped or redirected.

void settag(int fileNumber) 
{ 
  int rc; 
  int action; 
  struct file_tag ft; 
  memset(&ft,0,sizeof(ft));
  ft.ft_ccsid = 0x0417; 
  ft.ft_txtflag = 1; 
  ft.ft_deferred = 0;  // if on then use the ccsid from the program! 
  action = F_SETTAG; 
  rc =fcntl( fileNumber,action, &ft); 
  if ( rc < 0) 
  { 
     perror("fctl f_settag"); 
     printf("F_SETTAG %i %d\n",rc,errno); 
  } 
}

What gets printed (without the fcntl() code)?

It depends on

  • if you are printing to the terminal – or redirecting the output.
  • if the STDOUT fccsid is set to 1047
  • If _BPXK_AUTOCVT=”ON”

Single program case normal program

If you have a “normal” C program and some EBCDIC character data, when you print it, you get output like “Hello World”.

Single program case ASCII program or data – fccsid = 0

If you have a C program, compiled with the ASCII option, and print some ASCII data, when you print it you get output like ø/ËËÁÀ-À/È/-

Single program case ASCII program or data – fccsid = 1047

If you have a C program, compiled with the ASCII option, and print some ASCII data, when you print it you get output like "Hello World".

Program with both ASCII and EBCDIC data – compiled with ASCII

With a program with

#pragma convert("IBM-1047") 
 char * pEBCDIC ="CODEPAGE 1047  data"  ; 
#pragma convert(pop) 
printf("EBCDIC:%s\n",pEBCDIC); 

#pragma convert(819) 
char * pASCII = "CODEPAGE 819 data" ; 
#pragma convert(pop) 
printf("ASCII:%s\n",pASCII); 

When compiled without ascii it produces

EBCDIC:CODEPAGE 1047 data
ASCII:ä|àá& åá—–À/È/

When compiled with ascii it produces

EBCDIC:ÃÖÄÅ×ÁÇÅ@ñðô÷@@–£-
ASCII:CODEPAGE 819 data

Mixing programs compiled with/without the ASCII options

If you have a main program and function which have been compiled the same way – either both with ASCII compile option, or both without, and pass a string to the function to print, the output will be readable “Hello World”.

If the two programs have been compiled one with, and one without the ASCII options, an ASCII program will try to print the EBCDIC data, and an EBCDIC program will try to print the ASCII data.

If you know you are getting the “wrong” code page data, you can use the C run time functions __e2a_l or __a2e_l to convert the data.
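For example, a minimal sketch of printing a buffer known to contain ASCII text from an EBCDIC compiled program; __a2e_l() converts in place and returns the number of characters converted:

#include <stdio.h>
#include <unistd.h>    // __a2e_l / __e2a_l on z/OS

void printAsciiBuffer(char *buffer, size_t len)
{
    __a2e_l(buffer, len);              // convert ASCII to EBCDIC in place
    printf("%.*s\n", (int)len, buffer);
}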

Decoding/displaying the ASCII output

If you redirect the output to a file, for example ./cp2.so 1>a, you can display the contents of the file converted from ASCII using

  • the “source ascii” command in ISPF edit, or
  • oedit . , and use the “ea” (edit ascii) or “/” line command

The “source ascii” works with SDSF output, with the SE line command.

Several times I had a file with EBCDIC data but the file had been tagged as ASCII. I used chtag -tc IBM-1047 file.name to reset it.

Under the covers

At compile time

There is a predefined macro __CHARSET_LIB which is defined to 1 when the ASCII compiler option is in effect and to 0 when the ASCII compile option is not used.

There is C macro logic like the following which defines which function to use

#if (__CHARSET_LIB == 1)
  #pragma map (printf, "@@A00118")
#else
  #pragma map (printf, "printf")
#endif

A program compiled with the ASCII option will use a different C run time module (@@A00118), compared with the non ASCII mode, which will use printf.

The #pragma map “renames” the function as input to the binder.

This is known as bimodal and is described in the documentation.

I want my program to be bimodal.

Bimodal is where your program can detect whether it is running as ASCII or EBCDIC and make the appropriate decision. This can happen if you have code which is #included into the end user's program.

This is documented here.

You can use

#define _AE_BIMODAL 1 
if (__isASCII()) // detect the run time
   __printf_a(format, string);  // use the ascii version
else
   __printf_e(format, string); // use the ebcdic version

If you cannot open a data set, amrc may help

I had a C program which opened a dataset and read from it. I enhanced it, by adding comments and other stuff, and after lunch it failed to open

I undid all my changes, and still it failed to open! Weird.

I got message

  • EDC5061I
    • An error occurred when attempting to define a file to the system. (errno2=0xC00B0403)
    • Programmer response : Check the __amrc structure for more information. See z/OS XL C/C++ Programming Guide for more information on the __amrc structure.
  • C00B0403:
    • The filename argument passed to fopen() or freopen() specified dsname syntax. Allocation of a ddname for the dsname was attempted, but failed.
    • Programmer response: Failure information returned from SVC 99 was recorded in the AMRC structure. Use the information there to determine the cause of the failure.

This feels like one of those unhelpful "an error has occurred – we know what the error is – but we won't tell you" type messages I've seen before.

To find the reason I had to add some code to my program.

 file = fopen(fileName, mode);
 __amrc_type save_amrc;
 memcpy(&save_amrc, __amrc, sizeof(save_amrc));
 printf("AMRC __svc99_info %hd error %hd\n",
         save_amrc.__code.__alloc.__svc99_info,
         save_amrc.__code.__alloc.__svc99_error);
                                                                                    

and it printed

AMRC __svc99_info 0 528

The DYNALLOC (dynamic allocation) documentation (DYNALLOC uses SVC 99 to allocate data sets) has a section "Interpreting error reason codes from DYNALLOC". The meaning of 528 (X'0210') is: Requested data set unavailable. The data set is allocated to another job and its usage attribute conflicts with this request.

And true enough, in one of the ISPF sessions under one of my TSO userids I was editing the file.

It looks like

printf("__errno2 = %08x\n", __errno2());

would print the same information.

Thoughts

It appears that you cannot tell fopen to open it for read even if it has a write lock on it.

For DYNALLOC, if the request worked, these fields may have garbage in them – as I got undocumented values.

It would be nice if the developer of the fopen code produced messages like

EDC5061I: An error occurred when attempting to define a file to the system. (errno2=0xC00B0403) (AMRC=0x00000210)

Then it would be more obvious!

There are many ways to fail to read a file in a C program.

The 10 minute task to read a file from disk using a C program took a week.

There are several options you can specify on the C fopen() function and it was hard to find the one that worked. I basically tried all combinations. Once I got it working, I tried on other files, and these failed, so my little task turned into a big piece of research.

Depending on the fopen options and the use of fgets or fread, I got different data when reading from a dataset with two records in it!

  • AAAA – what I wanted.
  • AAAAx’15’ – with a hex 15 on the end.
  • AA
  • AAAAx’15’BBBBx’15’- both the records, with each record terminated in a line-end x’15’.
  • AAAABBBB – both records with no line ends.
  • x'00140000' x'00080000'AAAA x'00080000'BBBB.
    • This has a Block Descriptor Word (x’00140000′) saying the length of the block is x14.
    • There is a Record Descriptor Word (RDW) (X’00080000′) of length x’0008′ including the 4 bytes for the RDW.
    • There is the data for the first record AAAA,
    • A second RDW
    • The second data
    • The size of this record is 4 for the BDW, 4 for the first RDW, 4 for the AAAA, 4 for the second RDW, and 4 for the data = 20 bytes = x'14'.
  • Nothing!


There are different sorts of data

  • z/OS uses data sets which have records. Records can be blocked (for better disk utilisation). It is more efficient to write one block containing 40 * 80 byte records than to write 40 blocks each of 80 bytes. You can treat the data as one big block (of size 3200 bytes) – or as 40 blocks of 80 (or 2 blocks of 1600 …). With good blocking you can get 10 times the amount of data onto a disk. (If you imagine each record on disk needs a 650 byte header you will see why blocking is good.)
  • You can have “text” files, which contain printable characters. They often use the new line character (x’15’) to indicate end of line.
  • You can have binary files which should be treated as a blob. In these, a character like x'15' does not represent new line; it might just be part of a sequence x'13141516'.
  • Unix (and so OMVS) uses byte-addressable storage.
    • “Normal files” have data in EBCDIC
    • You can use enhanced ASCII. Where you can have files with data in ASCII (or other code page). The file metadata contains the type of data (binary or text) and the code page of the text data.
    • With the environment variable _BPXK_AUTOCVT you can have automatic conversion, which converts a text file in ASCII to EBCDIC when it is read.

From this you can see that you need to use the correct way of accessing the data (records or byte addressable), and of using the data (text or binary).

Different ways of getting the data

You can use

  • the fread function which reads the data, according to the file open options (binary|text, blocked)
  • the fgets function which returns text data (often terminated by a line end character).

Introduction to fopen()

The fopen() function takes a file name and information on how the file should be opened and returns a handle. The handle can be used in an fread() function or to provide information about the file. The C runtime reference gives the syntax of fopen(), and there is a lot of information in the C Programming guide:

There is a lot of good information – but it didn’t help me open a file and read the contents.

The file name

The name can be, for example:

  • “DD:SYSIN” – a reference to a data set via JCL
  • “//’USER.PROCLIB(MYPROC)'” – a direct reference to a dataset
  • “/etc/profile” – an OMVS file.

If you are using the Unix shell you need to use “//’datasetname'” with both sets of quotes.

The options

This options string has one or more parameters.

The first parameter defines the operation: read, write, read+write etc. Any other parameters are in the format keyword=value.

The read operation can be

  • “rb” to read a binary file. In a binary file, the data is treated as a blob.
  • "rt" to read a text file. A text file can have auto conversion – see the FILETAG C run time option.
  • “r”

The type can be

  • type=record – read a record of data from the disk, and give the record to the application
  • type=blocked. It is more efficient, in disk storage terms, to build up a block of 10 * 80 byte records into one 800 byte block (or even 3200 byte blocks). When a blocked record is read, the read request returns the first N bytes, then the next N bytes. When the end of the block is reached it will retrieve the next block from disk.
  • not specified, which implies byte access, rather than record access, and use of the fgets function.

An example fopen

FILE * f2 = fopen("//'COLIN.VB'","rt type=blocked");

How much data is read at a time?

Depending on the type of data, and the open options a read request may return data

  • up to the end of the record
  • up to and including the new-line character
  • up to the size of the buffer you give it.

If you do not give it a big enough buffer you may have to issue several reads to get the whole logical record.

The system may also “compress” data, for example remove trailing blanks from the end of a line. If you were expecting an 80 byte record – you might only get 20 bytes – so you may need to check this, and fill in the missing blanks if needed.

Reading the data

You can use the fread() function

The fread() parameters are:

  • the address of a buffer to hold the data
  • the unit of the buffer block size
  • the number of buffer blocks
  • the file handle.

It returns the number of completed buffer blocks used.

It is usually used

#define lBuffer 1024

char buffer[lBuffer];
size_t len = fread(buffer, 1 ,lBuffer ,fHandle );

This says the unit of the buffer is 1 character, and there are 1024 of them.

If the record returned was 80 bytes long, then len would be 80.

If it was written as

size_t len = fread(buffer, lBuffer, 1 ,fHandle );

This says the unit of buffer is 1024 – and there is one of them. The returned “length” is 0, as there were no full 1024 size buffers used.

It appears that a returned length of 0 means either end of file or a file error. You can tell if the read is trying to go past the end of the file (feof()) or was due to a file error (ferror()).

if (feof(hFile)) ...
else if (ferror(hFile))
{
   int myerror = errno;
   perror("Colins error");
}

You can use the fgets function.

The fgets() parameters are:

  • the address of a buffer to hold the data
  • the size of the buffer
  • the file handle.

This returns a null terminated string in the buffer. You can use strlen(buffer) to get the length of it. An empty string would have length 0.

fgets() returns a pointer to the string, or NULL if there is an error or the end of file is reached. If there is a problem you can use feof() or ferror() as above.
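A minimal sketch of a read loop using fgets (the file name and buffer size are just for illustration):

#include <stdio.h>
#include <string.h>

int readTextFile(const char *name)
{
    char buffer[1024];
    FILE *f = fopen(name, "rt");
    if (f == NULL) { perror("fopen"); return 8; }
    while (fgets(buffer, sizeof(buffer), f) != NULL)
    {
        size_t len = strlen(buffer);
        // strip the trailing new-line, if the whole record fitted
        if (len > 0 && buffer[len - 1] == '\n') buffer[len - 1] = 0;
        printf("record: %s\n", buffer);
    }
    if (ferror(f)) perror("fgets");    // NULL can mean end of file or error
    fclose(f);
    return 0;
}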

Results from reading data sets and files

I did many tests with different options, and configurations, and the results are displayed below.


The following are used in the tables:

  • Run.
    • job. The program was run in JCL using BPXBATSL or via PGM=…
    • unix shell – the program was run in OMVS shell.
  • BPXAUTO. This is used in OMVS shell. The environment variable _BPXK_AUTOCVT can have
    • “ON” – do automatic conversion from ASCII to EBCDIC where application
    • “OFF” do not do automatic conversion from ASCII to EBCDIC
  • open. The options used.
    • r – read
    • rb – read binary
    • rt – read text
    • “t=r” is shorthand for type=record
  • read. The C function used to read the data.
    • fread – this is the common read function. It can read text files and binary files
    • fgets – this is used to get text from a file. The records usually have the new-line character at the end of the data.
    • aread is a C program which uses fread – but the program is compiled with the ASCII option.
  • data
    • E is the EBCDIC data I was expecting. For example reading from a FB dataset member gave me an 80 byte record with the data in it.
    • E15 is EBCDIC data, but trailing blanks have been removed. The last character is an EBCDIC line end (x15).
    • A0A. The data is returned in ASCII, and has a trailing line end character (x0A). You can convert this data from ASCII to EBCDIC using the C run time function __a2e_l(buffer,len) .
    • Abuffer – data up to the size of the buffer (or size -1 for fgets), in ASCII.
    • Ebuffer – data up to the size of the buffer (or size -1 for fgets), in EBCDIC. This data may have newlines within the buffer.

To read a dataset

This worked with a data set and inline data within a JOB.

Run         BPXAUTO   open     read    data
Job         NA        rb,t=r   fread   E
Job         NA        r|rt     fread   Ebuffer
Job         NA        r        fgets   E15
Job         NA        rt       fgets   E15
Unix shell  Any       rb,t=r   fread   E
Unix shell  Any       r|rt     fread   Ebuffer
Unix shell  Any       r        fgets   E15
Unix shell  Any       rt       fgets   E15

Read a normal(EBCDIC) OMVS file

Run         BPXAUTO   open     read    data
Job         off       rb,t=r   fread   E
Job         off       rt       fgets   E15
Job         off       *        fread   Ebuffer
Unix shell  off       r        fgets   E15
Unix shell  off       rt       fgets   E15
Unix shell  off       rb       fgets   E15
Unix shell  off       *        fread   E15
Unix shell  on        rb,t=r   fread   E
Unix shell  on        rb       fgets   E15

Read an ASCII file in OMVS

Run         BPXAUTO   open     read    data
Job         off       r        agets   A0A
Job         off       rb       agets   A0A
Job         off       rt       agets   A0A
Unix shell  on        r        fgets   E15
Unix shell  on        rb       fgets   E15
Unix shell  on        rt       fgets   E15
Unix shell  on        *        fread   buffer
Unix shell  off       r        agets   A0A
Unix shell  off       rb       agets   A0A
Unix shell  off       rt       agets   A0A
Unix shell  off       *        fread   Abuffer

To read a binary file

To read a data set

Run         BPXAUTO   open   read    data
Job         off       rb     fread   E
Unix shell  on        rb     fread   E

Read a binary normal(EBCDIC) OMVS file

Run         BPXAUTO   open   read    data
Job         off       rb     fread   E
Unix shell  on        rb     fread   E
Unix shell  off       rb     fread   E

Read a binary ASCII file in OMVS

If you list the attributes of a binary file, for example "ls -T ecrsa1024.p12", it gives

b binary T=off ecrsa1024.p12

Which shows it is a binary file

Run         BPXAUTO   open     read    data
Job         off       rb,t=r   fread   E
Unix shell  on        rb,t=r   fread   E
Unix shell  off       rb,t=r   fread   E

Reading data sets from a shell script

If you try to use fopen(“DD:xxxx”…) from a shell script you will get

FOPEN: EDC5129I No such file or directory. (errno2=0x05620062)

If you use fopen("//'COLIN.VB'"…) and specify a fully qualified dataset name it will work.

fopen("//VB"..) will put the RACF userid in front of the name. For example it will attempt to open "//'COLIN.VB'".

How big a buffer do I need?

If the buffer is not large enough to hold all of the data in the record, the buffer will be filled up to its size. For example, using fgets and a 20 byte buffer, 19 characters were returned; the 20th character was a null. The "new-line" character was present only when the end of the record was reached.

How can I tell the size of the buffer I need?

There is data available which helps – but does not give the whole picture.

With the C fldata() – Retrieve file information – function you can get information from the file handle, such as the following (a short sketch follows the list):

  • the name of the dataset (as provided at open time)
  • record format – fixed, variable
  • dataset organisation (dsorg) – Partitioned, Temporary, HFS
  • open mode – text, binary, record
  • device type – disk, printer, hfs, terminal
  • blksize
  • maximum record length
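A minimal sketch of using fldata() on an open file; the field names (such as __recfmV and __maxreclen) are the ones I believe are in fldata.h, so check the header on your system:

#include <stdio.h>

void showFileInfo(FILE *f)
{
    fldata_t info;
    char dsname[100];                  // receives the file name
    if (fldata(f, dsname, &info) != 0)
    {
        perror("fldata");
        return;
    }
    printf("name=%s recfm F=%d V=%d U=%d Blk=%d\n",
           dsname, info.__recfmF, info.__recfmV, info.__recfmU, info.__recfmBlk);
    printf("dsorg PS=%d PO=%d HFS=%d blksize=%d maxreclen=%d\n",
           info.__dsorgPS, info.__dsorgPO, info.__dsorgHFS,
           (int)info.__blksize, (int)info.__maxreclen);
}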

With fstat() and fstat64() – Get status information from a file handle – and lstat() and lstat64() – Get status of a file name or symbolic link – you can get information about an OMVS file name (or file handle). This has the information available in the OMVS "ls" command. For example:

  • file serial number
  • owner userid
  • group userid
  • size of the file
  • time last access
  • time last modified
  • time last file status changed
  • file tag
    • ccsid value, 0x0000 for untagged, or 0xffff for binary
    • pure text flag.

Example output

For data sets and files (along the top of the table)

  • VB a sequential data set Variable Blocked
  • FB a member of user.proclib which is Fixed Block
  • SYSIN inline data in a job
  • Loadlib a library containing load modules (binary file)
  • OMVS file for example zos.c
  • ASCII for example z.py which has been tagged as ASCII

Other values

  • V variable format data
  • F fixed format data
  • Blk it has blocked records
  • B "Indicates whether it was allocated with blocked records". I do not know the difference between Blk and B
  • U Undefined format
  • PS It is a Physical Sequential
  • PO It is partitioned – it as members
  • PDSE it is a PDSE
  • HFS it is a file from OMVS
                  VB       FB          SYSIN    loadlib          OMVS file  ASCII
fldata recfm      V Blk B  F Blk B     F Blk B  U                U          U
fldata dsorg      PS       PO PDSMem   PS       PO PDSMem PDSE   HFS        HFS
fldata device     disk     disk        other    disk             HFS        HFS
fldata blocksize  6144     6160        80       6400             0          0
fldata maxreclen  1024     80          80       6400             1024       1024
stat ccsid        0        0           0        0                0          819
stat file size    0        0           0        0                4780       824

To read a record, you should use the maxreclen size. This may not be the size of the data record but it is the best guess.

It looks like the maxreclen for Unix files is 1024 – I guess this is the page size on disk.