Python creating a callback for an asynchronous task in an external function.

At a high level, I wanted a Python program running as a started task on z/OS to be able to catch operator requests, such as shutdown. The solutions I came up with felt a little complex for what I wanted; then I saw an example of using a callback which did "after a period of time, invoke this function with these parameters". Could this be adapted to provide "call this Python function when an operator issues a command to the started task"? As usual it got me into areas I was unfamiliar with, but the answer is yes, it can be adapted.

Background

The interface for an application to be notified of an operator request is the z/OS QEDIT interface. There is an Event Control Block (ECB) which gets posted when there is data to process. An application can wait on this ECB.

There are several approaches that can be taken for a (Python) program

  • Have the application loop round checking the ECB to see if it has been posted. If it has been posted, issue a WAIT on the ECB (which will wake up immediately), get the message and return. This would work, but how long do you wait between loops? The shorter the interval, the more frequently you scan, and so the more CPU you use. A generic sketch of this trade-off follows the list.
  • Have a thread which waits to be posted. The thread wakes up and notifies the application.
    • Python has an async interface where applications can multitask on one thread. The code has to be well behaved: it must give up control when it has no work to do. If the (single) thread does an operating system wait, all work stops until the wait completes. This approach will not work, as the thread has to wait for the ECB.
    • Use a thread from the Python thread pool. You can get a thread from Python which can wait for the ECB. This thread has to be well behaved and release the Global Interpreter Lock (GIL), which controls Python multi-programming. An application can only update Python data when it has the GIL; this prevents problems with concurrent access to fields.
    • Use a thread which is not from the Python thread pool. This thread can call back into Python and run a function.
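
Here is a generic (not z/OS-specific) sketch of the polling trade-off in the first approach, with a threading.Event standing in for the ECB; the poll interval trades responsiveness against CPU:

import threading
import time

posted = threading.Event()   # stands in for the ECB being posted

def poll_for_work(interval):
    # A short interval reacts quickly but burns CPU scanning;
    # a long interval is cheap but slow to notice the post.
    while True:
        if posted.is_set():      # has the "ECB" been posted?
            print("work arrived - process it")
            posted.clear()
            break
        time.sleep(interval)     # do nothing for a while, then look again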

This blog post is about the last item in the list above: using a thread which is not in the Python thread pool to call back into a function in the main Python program.

High level view of the program

There are several “moving parts” to the program.

  • A Python external function which is passed the Python function and any parameters for the function. This external function creates a z/OS thread and passes the Python function name and its parameters to the thread.
  • Register a Python shutdown clean-up exit, to wake up or cancel the async thread when the Python program finishes.
  • The C program which runs as an independent thread (also known as a subtask or TCB). It registers the thread with Python (and gets the GIL), then loops:
    • Release the GIL
    • Wait for the QEDIT ECB to be posted
    • Get the GIL
    • Build the parameter list from the data received
    • Call the Python function, passing the original data and the received data
  • The Python function is passed the original parameters and the data from the request. It can add data to a queue, update Python variables, and set a Python event. The main task can wait on this event, and so process the requests as they come in.

The main Python program

The call handle = zconsole.acb(ccp_cb,[exit_event]) creates the async thread and returns a handle. The handle is used to cancel the outstanding wait.

There is code to update variables in a thread-safe manner by using a thread lock.

An event is used to signal completion.

import zconsole as zconsole 
...
# This is the callback function which gets control from the C program
def ccp_cb(args,QEDIT_data) : 
      global stop   # set this to 1 to end processing
      global global_counter  # increment this 
      parms = args[1] # [functionName,[parms]]
      e  = parms[0]   # event 
      with threadLock: 
          global_counter += 1 
      print("qedit",QEDIT_data) # display what we received
      if QEDIT_data["verb"] == "Stop": 
         stop = 1 
      e.set() # post event - wake up main 
###############################################
threadLock = threading.Lock() # for serialisation of updates
exit_event = threading.Event() # for event processing
# wait for up to 30 seconds, at most 4 times
# initiate the console wait using an Asynchronous CallBack
handle = zconsole.acb(ccp_cb,[exit_event]) 
# This returns a handle.
for i in range (0,4):  # at most 4 times
   exit_event.wait(timeout=30) # wait with a 30 second time out
   if (exit_event.is_set() == False): # we timed out
       break 
   print("GlobalCounter",global_counter) 
   print("stop",stop) # debug info
   if stop == 1: 
      break 
print("after stop ",stop)
zconsole.cancel(handle) #  stop the async task  

The external function zconsole.acb(function,[parms])

The external acb (asynchronous call back) function (written in C) has code

  • to read the parameters passed to the function
  • increment the reference count of the Python objects to prevent Python from freeing them (the async thread decrements the reference count)
  • attach a thread to run a program (called cthread).
...
pthread_t thid; 
PyObject * method = NULL; 
PyObject * parms  = NULL; 
// get the data as objects
if (!PyArg_ParseTuple(args,"OO",
    &method,   // function
    &parms ))  // parms
{  // problem parsing the arguments
    return NULL;
} 
...
// zargs is used to hold the parameters
zargs -> method = method; 
zargs -> parms  = parms; 
// the following are decremented within the Async thread
Py_INCREF(zargs -> parms); /* Prevent it from being deallocated. */
Py_INCREF(zargs -> method);/* Prevent it from being deallocated. */
// create the thread
rc = pthread_create(&thid, NULL, cthread, zargs);  

The async C thread to process the QEDIT data

This program

  • is passed the parameter list containing the Python function Object, and the Python function parameter list object
  • releases the GIL
  • executes the assembler program which waits on the QEDIT ECB
  • when this returns, it gets the GIL
  • builds a dictionary of parameters (“name”:”value”,…) from the QEDIT data
  • calls the Python function (the function object), passing the parameters given to the external function and the dictionary of parameters from the operator request (from QEDIT).
void * cthread(void *_arg) { 
  struct thread_args * zargs  = (struct thread_args *) _arg ; 
                                                                           
  PyGILState_STATE gstate; 
  PyObject *rv = NULL;  // returned object 
  PyObject *x  = NULL;  // returned object 
  char * ret  = 0; 
  long  rc; 
  int stop = 0; 
  rc = 0; 

  // register this thread to Python                                                                            
  gstate = PyGILState_Ensure(); 
  while(1) {  // main loop

    Py_BEGIN_ALLOW_THREADS 
    //   QEDIT waits to be posted and returns the data    
    rc = QEDIT( pMsg);  // assembler function
    // get the GIL and stop any other work
    Py_END_ALLOW_THREADS 
    ...
    // convert console name from EBCDIC to ASCII
    __e2a_l(  pCIBX ->consolename ,8 ); 
    // build the parameter list to pass to Python function                                                                 
    rv = Py_BuildValue("{s:i,s:s,s:s#,s:s#,s:y#,s:y#,s:y#}", 
           "rc", rc, 
           "verb", pVerb, 
           "data",&(pCIB -> data[0]),lData, 
           "console",&(pCIBX -> consolename),l8, 
           "cart",&(pCIBX -> CART),l8, 
           "consoleid",&(pCIBX -> consoleid),l8, 
           "oconsoleid",&(pCIBX -> consoleid),l8); 
                                                                     
   // Py_BuildValue returned a new reference, so no extra Py_INCREF is needed
   //  Call the Python function
   x = PyObject_CallFunctionObjArgs( zargs -> method,zargs -> a1,rv      , NULL); 

   if ( x != NULL) 
      Py_DECREF(x);           /* release the object returned by the callback */ 
   Py_DECREF(rv);             /* release the parameter dictionary */ 
   if (stop >0) 
   { 
      //printf("Stop found - cthread exiting \n"); 
     break; 
   } 
} // end of main loop
if ( zargs -> a1  != NULL) 
  Py_DECREF(zargs -> a1);    /* allow it to be deallocated. */ 
if ( zargs -> method  != NULL) 
   Py_DECREF(zargs -> method);  /* Allow it to be deallocated. */ 
pthread_exit(ret); 
  return 0; 
}  // end of cthread

Ending the thread

A thread running asynchronously needs to end when the caller ends. If it stays running, you will get a system abend A03.

You have a choice:

  • Pass a "shutdown ECB" to the thread, and have the thread wait on an ECBLIST (the shutdown ECB and the QEDIT ECB). The high level application can then post this ECB. I had an external function zconsole.cancel(handle) which got the address of the ECB from the parameter and posted it.
  • Cancel the thread. I had an external function zconsole.cancel(…) which was passed the thread-id and issued pthread_cancel(thread-id). In the end I used the shutdown ECB as it was cleaner.

I found it best to use a class for my thread, and register a function to be called at Python program shutdown.

For example

import atexit  # register the cleanup function to run at shutdown

class console: 
    handle = None 
    def __init__(self,a): 
       print("console.__init__",a) 
    def cb(self,a,b): 
       # call the function to create the async task
       # and return the handle
       self.handle =  zconsole.acb(a,b) 
       #register cleanup for shutdown 
       atexit.register(self.cleanup,self.handle) 
                                                                                                    
    def cleanup(self,handle): 
       print("IN CLEANUP") 
       if handle is not None: 
          zconsole.cancel(self.handle) 

This says: when the cb function is called to set up the callback, add this object and its cleanup routine to the list of "shutdown" activities. The cleanup function tells the async thread to shut down.
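
A possible way of using the class (my sketch, reusing the ccp_cb callback and exit_event from the earlier example):

c = console("operator console")   # prints console.__init__ ...
c.cb(ccp_cb, [exit_event])        # create the async thread and register the cleanup
# ... main processing ...
# at interpreter exit, atexit runs c.cleanup(handle), which calls zconsole.cancel()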

How do you know the thread has ended?

You can use code like pthread_cleanup_push and pthread_cleanup_pop to call an ending function. This function is called when the thread:
• Calls pthread_exit()
• Does a return from the start routine
• Is cancelled because of a pthread_cancel()

In your cleanup routine you need to check for locks and other resources owned by the thread, and release them.

PyGILState_STATE gstate; // referred to from cthread and cleanup
void cleanup(void * arg)
{
   printf("Thread was cancelled!\n\n"); 
   int s = PyGILState_Check();
   printf("chthread Python latch %d\n",s);
   // release the lock if we have it
   if (s)      
      PyGILState_Release(gstate);
}
void * cthread(void *_arg) {
  pthread_cleanup_push(cleanup,NULL);
  struct thread_args * tA = (struct thread_args *) _arg ;
  ...

  pthread_cleanup_pop(0);
  pthread_exit(ret);

}


Why adding a printf caused my program to hang

Or “how to cancel a pthread safely; and reverse time”

I was doing some work with external Python functions, and attaching a subtask to intercept operator requests. It was very frustrating when I added a printf to the C program to provide diagnostic information, and the program then did not produce any output, even from a previous printf (spooky). Remove the printf and it worked, including the earlier print("Starting") before my new printf.

After a couple of days, and some long walks I found out the reason why. It was all down to my lack of knowledge about what is available with pthreads, and locking.

Python has a lock to serialise work. While a thread has this lock, no other thread can do any Python work.

An attached thread can be configured as to how it responds to a cancel request. For example, you may not want the thread cancelled in the middle of a critical update, such as while it is holding a lock.

By default it looks like threads are non-cancellable, unless you allow for it.

When I ran my job, there was an abend A03: "A task tried to end normally by issuing a RETURN macro or by branching to the return address in register 14. The task was not ready to end processing because …: The task had attached one or more subtasks that had not ended."

The attached thread needs to be told to shut down, or to respond to a cancel request.

Creating a thread

struct thread_args {
   PyObject *method;
   ...
   };
#define _OPEN_THREADS 2 
#include <pthread.h>
//create a structure to pass parameters to the thread.
struct thread_args *zargs = malloc (sizeof (struct thread_args));
zargs -> method = method;
...
pthread_t thid; 
int rc; 
// invoke pThread to create thread and pass the parms through 
rc = pthread_create(&thid, NULL, cthread, zargs); 
if (rc != 0) { 
  printf("pthread rc %d \n", rc); 
  perror("pthread_create() error"); 
} 

To cancel a thread

The short answer to how to cancel a thread is

rc = pthread_cancel(thid);
if ( rc != 0) 
{
   perror("Trying to cancel the thread");
}

Return code 0 means the request to cancel the thread was successfully issued, but it does not necessarily mean the thread has been cancelled, because the thread could be set as non-cancellable.

Within the thread program.

You can configure the program running as a thread to be cancellable:

  • Not cancellable – the default
  • Cancellable
    • At this point
    • At any time.
    • Not between these instructions

To make a thread non cancellable

int previous = pthread_setintr(PTHREAD_INTR_DISABLE);

You can use the returned variable to reset the status with pthread_setintr(previous).

To make a thread cancellable at this point

Set up the thread. Do pthread_setintrtype before pthread_setintr to eliminate a timing window.

// Specify how it is interruptible, any time, or controlled
if (pthread_setintrtype(PTHREAD_INTR_CONTROLLED ) == -1 )
{ perror("error setting pthread_setintrtype"); ... }

// Say it is interruptible
int previous = pthread_setintr(PTHREAD_INTR_ENABLE);

The initial values are

  • pthread_setintrtype is PTHREAD_INTR_CONTROLLED (0)
  • pthread_setintr is PTHREAD_INTR_ENABLE(0)

So you may not need to use the pthread_setintr* functions.

The thread needs an “interruptible” function.

The documentation says

PTHREAD_INTR_CONTROLLED:
The thread can be cancelled, but only at specific points of execution. These are:

  • When waiting on a condition variable, which is pthread_cond_wait() or pthread_cond_timedwait()
  • When waiting for the end of another thread, which is pthread_join()
  • While waiting for an asynchronous signal, which is sigwait()
  • When setting the calling thread’s cancel-ability state, which is pthread_setintr()
  • Testing specifically for a cancel request, which is pthread_testintr()
  • When suspended because of POSIX functions or one of the following C standard functions: close(), fcntl(), open(), pause(), read(), tcdrain(), tcsetattr(), sigsuspend(), sigwait(), sleep(), wait(), or write().

In my thread I had used the interruptible function pthread_testintr().

printf("before testcancel\n");
pthread_testintr() ;
printf("after testcancel\n");

When my code was running I had

before testcancel
after testcancel

before testcancel
after testcancel

pthread_cancel() was issued and the output was

before testcancel

So we can see the code was behaving as expected, and was cancelled at the pthread_testintr() function.

To make a thread cancellable at any time

if (pthread_setintrtype(PTHREAD_INTR_ASYNCHRONOUS ) == -1 )
{ perror("error setting pthread_setintrtype"); ... }
int previous = pthread_setintr(PTHREAD_INTR_ENABLE);

If you use this, you need to design the code so that the thread holds no locks or mutexes when it can be cancelled; they will not be released automatically.

To make a thread not cancellable between these instructions

pthread_setintrtype(PTHREAD_INTR_ASYNCHRONOUS)
pthread_setintr(PTHREAD_INTR_DISABLE)
// thread non cancellable

get a lock
do some work
free a lock

pthread_setintr(PTHREAD_INTR_ENABLE);
// thread now cancellable any point after this

The pthread_setintr(PTHREAD_INTR_DISABLE) … pthread_setintr(PTHREAD_INTR_ENABLE) pair protects the non-cancellable code.

The pthread_setintrtype(PTHREAD_INTR_ASYNCHRONOUS) says that outside of the non-cancellable code it can be cancelled at any point when interrupts are enabled.
Instead you could use pthread_setintrtype(PTHREAD_INTR_CONTROLLED ) and pthread_testintr(), to make your code interruptible at a specific point.

It is not spooky.

When running my code, I initially had it running so it was interruptible anywhere.

What was happening was

  • get the Python lock
  • get interrupted: the thread ends while still holding the lock

Adding a printf to my code changed where the thread was interrupted. With the printf, it was interrupted while the Python lock was held; the thread was cancelled with the lock still held, and no other Python work ran.

Without the additional printf, the thread was cancelled without the Python lock being held.

By putting the pthread_ calls around the code which held the lock, I could make sure the lock was released before the thread ended.

Spooky lack of printing

The Python program had used print("starting"), but this was written to the print buffers; it was not forced out to disk.

When I used Python print("starting",flush=True) the data was forced out before progressing.

The C function is fflush(stdout);

Overall – not spooky at all, just a lack of understanding.

Running in parallel in Python on z/OS

I wanted to have a long running started task with Python acting as a server. As part of this I needed to wait on more than one event. This proved to be a hard challenge to get working.

Background

On z/OS a “process” is an address space, and a thread is a TCB.

There are several Python models for doing asynchronous work, and waiting for one or more events.

  • Cooperative multi-tasking. One thread acts as a dispatcher: "tasks" are put on the work queue when they are ready to run, and taken off the work queue when they are waiting, just like an operating system. This is the asyncio model.
  • Using multiple threads for the application. This is the ThreadPoolExecutor model.
  • Using different address spaces for the application. This is the ProcessPoolExecutor model.
  • Creating threads within an extension function.


Background knowledge

It took me a couple of days to get my parallel processing program to work. Even when I understood the concepts I still got it wrong, until I had a flash of understanding.

The Python Global Interpreter Lock (GIL)

To understand how Python works, especially with multiple concurrent tasks, you need to understand the Python Global Interpreter Lock.

Python code is not thread-safe; it is pseudo thread-safe. Rather than having to worry about concurrent access to data, with different threads able to read and change data at the same time, Python allows only one thread to execute Python code at a time. It uses a global lock for this. A thread gets the lock, does some work, and releases the lock. With a simple application this is invisible. When you try to develop an application with parallel "threads" you need to understand the lock.
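
A small sketch (plain Python, nothing z/OS-specific) of the serialisation this gives you: two threads update a shared counter, the explicit lock protects the read-increment-write sequence, and the GIL ensures only one thread runs Python bytecode at any instant.

import threading

counter = 0
lock = threading.Lock()

def worker(n):
    global counter
    for _ in range(n):
        with lock:           # protect read/increment/write as one unit
            counter += 1

threads = [threading.Thread(target=worker, args=(100000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("counter", counter)    # 200000 - the updates were serialised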

My first operating system

When people start writing an operating system from scratch they may have logic like

  • Start the I/O
  • Spin in an instruction loop waiting for the I/O to complete
  • Do some more work

If you have only one processor in your system, no other work is done while waiting for the I/O to complete.

My second operating system

Having written your first operating system, the next operating system is more refined and has logic like

  • Start the I/O
  • Give up control – but resume the application when the I/O completes
  • Resume from here.

In this case even with just one processor in your system, it can do lots of other work while the I/O is in progress. That application instance is suspended until the I/O completes.

The same principles apply to Python.

Python concurrent processing models

As well as the "single threading" standard Python program, Python supports three concurrent processing models:

  • One thread in one process (one address space). It can support concurrent bits of application as long as they cooperate while they are waiting for something. This is known as the asyncio model.
  • Multiple threads in one process (one address space). A typical use of this is CPU intensive threads, or operating system waits. The threads do not have to cooperate in the way asyncio tasks do. This is known as the ThreadPool model.
  • One or more threads in multiple processes (multiple address spaces). This is known as the ProcessPool model. I cannot see many use cases for this model.

I’ll give you an exercise to help you understand the processing.

import asyncio
import time
from datetime import datetime

async def cons(name):
   print(name, "start",datetime.utcnow().strftime('%Y-%m-%d %H:%M:%S.%f'),flush=True)
   time.sleep(10) 
   print(name,"stop",datetime.utcnow().strftime('%Y-%m-%d %H:%M:%S.%f'),flush=True)
   return (42)
# the following runs inside an async function (see the asyncio.run example later)
w = asyncio.create_task(cons("A"))
c = asyncio.create_task(cons("B"))
done, pending = await asyncio.wait([c,w],return_when=asyncio.ALL_COMPLETED)

The above code (based on examples from the web) creates two asynchronous instances which allow you to run them in parallel. The instance is created with the create_task, and the wait for them both to complete is in the asyncio.wait([]) function.

Does it print out

A start 16:00:00
B start 16:00:00
A stop 16:00:10
B stop 16:00:10

Or

A start 16:00:00
A stop 16:00:10
B start 16:00:10
B stop 16:00:20

Full marks if you chose the second one. This is because time.sleep(10) does not give up control. It runs, waits, ends, and only then can B run.

If we replace time.sleep(10) with await asyncio.sleep(10), we use a "sleep" which cooperates and gives up control. When this is used, you get the first output, and both finish in 10 seconds.

From this I learned that not all Python functions are designed for running in parallel.

By displaying information about what was running, I could see that both instances were running on the same thread (TCB).
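
One way of displaying that information (my sketch) is to print the current thread name from inside each coroutine:

import asyncio
import threading

async def where_am_i(name):
    # both coroutines report the same thread - asyncio runs them on one thread (TCB)
    print(name, "running on", threading.current_thread().name)
    await asyncio.sleep(1)

async def main():
    await asyncio.gather(where_am_i("A"), where_am_i("B"))

asyncio.run(main())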

Using multiple TCBs.

I wrote an extension which waited for a z/OS console event. I had a "console" routine, and a "wait" routine, in the Python program.

When I used the asyncio model, there is only one task (TCB). All work was suspended while my z/OS wait was in progress. As soon as this finished, other work ran. In this case, using the asyncio model with my external function doing an operating system wait did not work.

I then switched to the ThreadPool model, so I could use one TCB for the z/OS wait thread, and the other Python work could run on a different TCB.

However, this appeared to have the same problem: no work was done while the z/OS wait was in progress.

This was because of the Global Interpreter Lock. My thread was dispatched holding the GIL across the wait, so no other Python work ran – it was all waiting for the GIL.

To fix this I had to change my program to “cooperate” with Python and release the lock when it was not needed. In my C program I used

Py_BEGIN_ALLOW_THREADS
rc = ProgramToWaitForOperatorData();
Py_END_ALLOW_THREADS

  • Py_BEGIN_ALLOW_THREADS says give up the Python lock.
  • Py_END_ALLOW_THREADS says I’m ready to run – please give me the Python lock.

With this small coding fix, I got my parallelism.

From this I learned that you need to worry about the Global Lock if your Python Extension issues a wait, or can be suspended.

More information on coding with Asyncio

This model has one task which does all of the work. To work successfully in this environment, functions need to be cooperative: for example await asyncio.sleep(2) instead of the uncooperative time.sleep(2). Extensions must not use long waits. If the extension waits, everything waits.
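
If you are stuck with a blocking call under asyncio, the usual technique (not the one I used; I switched to ThreadPoolExecutor) is to hand the blocking call to a worker thread with loop.run_in_executor, so the event loop thread itself never waits. A minimal sketch, with time.sleep standing in for a blocking extension call:

import asyncio
import time

def blocking_wait():
    time.sleep(5)            # stands in for an extension doing an operating system wait
    return "operator data"

async def main():
    loop = asyncio.get_running_loop()
    # run the blocking function on a thread from the default executor,
    # so other coroutines keep running while it waits
    result = await loop.run_in_executor(None, blocking_wait)
    print(result)

asyncio.run(main())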

Minimum setup

You need

  • import asyncio at the top of your program
  • asyncio.run(main2()) to actually run your function (main2) in asyncio mode.

For example

import asyncio
# The following is defined as a function - but it does all the work
async def main2():
    ... 
#  This runs the above routine as an async thread.
asyncio.run(main2()) 

I defined the mywait function. It is passed an event so it can post (set) it and wake up the caller.

async def mywait(event): 
     print("WAIT Start",datetime.utcnow().strftime('%Y-%m-%d %H:%M:%S.%f'),flush=True) 
     time_event = threading.Event() 
     for i in range(0,4): 
        time_event.wait(10) # every 10 seconds
        print("WAIT Woke ",datetime.utcnow().strftime('%Y-%m-%d %H:%M:%S.%f'),flush=True) 
        if event.is_set(): 
           print("WAIT Event",datetime.utcnow().strftime('%Y-%m-%d %H:%M:%S.%f'),flush=True) 
           break 
     print("WAIT STOP ",datetime.utcnow().strftime('%Y-%m-%d %H:%M:%S.%f'),flush=True) 
     event.set() 
     return 44 

To create an asynchronous thread and start it running, use

w = asyncio.create_task(mywait(event),name="MYWait")
print("W",w)

gives

W <Task pending name=’MYWait’ coro=<mywait() running at /u/tmp/console/AS2.py:18>>

This means

  • It is a Task
  • It is pending execution (not finished running yet)
  • The name is ‘MYWait’
  • The routine is a function “mywait()”
  • running at /u/tmp/console/AS2.py line 18

To wait for one or more tasks to complete use

done, pending = await asyncio.wait([c,w],return_when=asyncio.ALL_COMPLETED)

You give a list of the threads [c,w] and specify when you want it to return

  • return_when=asyncio.ALL_COMPLETED
  • return_when=asyncio.FIRST_COMPLETED

This returns a list of the tasks (done) which have finished, and a list of those (pending) which have not finished yet.

You can use

if c in done:  
    print(c.result()) 
    # do something else

My console routine is defined

async def cons(event):
    print("CONS start",datetime.utcnow().strftime('%Y-%m-%d %H:%M:%S.%f'),flush=True)
    await asyncio.sleep(2) # do something which cooperates
    print("CONS Stop ",datetime.utcnow().strftime('%Y-%m-%d %H:%M:%S.%f'),flush=True)
    return (42)

Coding with ThreadPoolExecutor

With ThreadPoolExecutor you set up a thread pool. Any requests that are created use a thread from this pool. If there are no available threads, the request is delayed until a thread is available.

A thread can use an operating system sleep, but extensions need to release and re-obtain the Python GIL.
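
A small sketch of the "request is delayed until a thread is available" behaviour, using a pool with a single thread:

import concurrent.futures
import time

def job(name):
    print(name, "started")
    time.sleep(2)
    return name

executor = concurrent.futures.ThreadPoolExecutor(max_workers=1)
a = executor.submit(job, "first")
b = executor.submit(job, "second")  # queued; does not start until "first" releases the thread
print(b.result())                   # takes roughly 4 seconds in total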

Minimum setup

You need

  • import concurrent.futures at the top of your program
  • executor = concurrent.futures.ThreadPoolExecutor(max_workers=3) to create a thread pool.

To create an asynchronous thread and start it running with function “mywait” use

w = executor.submit(mywait,parm1,parm2)

Note: this is different from the asyncio model, where you passed mywait(parm1,parm2).

print("W",w) gives

W <Future at 0x5008b9a4c0 state=running>

To wait for one or more tasks to complete use

done, pending = concurrent.futures.wait( [w,c], return_when=concurrent.futures.FIRST_COMPLETED)

You give a list of the threads [c,w] and specify when you want it to return

  • return_when=concurrent.futures.ALL_COMPLETED
  • return_when=concurrent.futures.FIRST_COMPLETED

This returns a list of the tasks (done) which have finished, and a list of those (pending) which have not finished yet.

You can use

if c in done:  
    print(c.result()) 
    # do something else

The routine is defined

def cons(event):
    print("CONS start",datetime.utcnow().strftime('%Y-%m-%d %H:%M:%S.%f'),flush=True)

    print("CONS Stop ",datetime.utcnow().strftime('%Y-%m-%d %H:%M:%S.%f'),flush=True)
    return yy
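
Putting the fragments above together, a minimal self-contained sketch (my names, not the full program) looks like this:

import concurrent.futures
import threading
import time

def mywait(event):
    event.wait(10)            # wait up to 10 seconds for the event
    return 44

def cons(event):
    time.sleep(2)             # stand-in for the real console work
    event.set()               # wake up mywait
    return 42

event = threading.Event()
executor = concurrent.futures.ThreadPoolExecutor(max_workers=3)
w = executor.submit(mywait, event)
c = executor.submit(cons, event)
done, pending = concurrent.futures.wait([w, c],
        return_when=concurrent.futures.ALL_COMPLETED)
if c in done:
    print("cons returned", c.result())
if w in done:
    print("mywait returned", w.result())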

ProcessPoolExecutor

I cannot see many uses for the ProcessPoolExecutor model. This runs the work in different address spaces, which makes sharing information (such as program variables) much harder.

The basic program looks like

import concurrent.futures
def cons():
    zconsole.put("CONS TASK") 
    # do something involving a long wait
    return x 
def foo2(a):
    zconsole.put("FOO TASK") 
    # do something involving a long wait
    return z 
zconsole.put("MAIN TASK") 
executor = concurrent.futures.ProcessPoolExecutor(max_workers=3) 
w = executor.submit(foo2,"parameter1") 
c = executor.submit(cons) 
done, pending = concurrent.futures.wait([w,c],return_when=concurrent.futures.FIRST_COMPLETED) 
if c in done: 
   print("cons task finished:  result",c.result()) 
   

The output on the z/OS console included

S PYT
IEF695I START PYT WITH JOBNAME PYT IS ASSIGNED TO USER START1
STC06801 +MAIN TASK
IRR812I PROFILE * (G) IN THE STARTED CLASS WAS USED
TO START BPXAS WITH JOBNAME BPXAS.
IEF403I BPXAS – STARTED – TIME=06.29.05
BPXP024I BPXAS INITIATOR STARTED ON BEHALF OF JOB PYT RUNNING IN ASID
0045

IRR812I PROFILE * (G) IN THE STARTED CLASS WAS USED 617
TO START BPXAS WITH JOBNAME BPXAS.
IEF403I BPXAS – STARTED – TIME=06.29.05
BPXP024I BPXAS INITIATOR STARTED ON BEHALF OF JOB PYT RUNNING IN ASID
0045

IEF403I BPXAS – STARTED – TIME=06.29.06
BPXP024I BPXAS INITIATOR STARTED ON BEHALF OF JOB PYT RUNNING IN ASID
0045
STC06802 +FOO TASK
STC06803 +CONS TASK

Three address spaces were started up, and the three Write To Operator requests (+MAIN TASK, +FOO TASK, +CONS TASK) each came from a different address space.

It takes a second or so to start each address space, so start-up with this approach is slower than with the thread model.

How it works

Your program is run in each address space.

You need to have

def main2():
    ....

if __name__ == '__main__':
   main2()

You need the if __name__ == '__main__' test to prevent the "main" code starting in all the address spaces.

You can pass data to the asynchronous object for example

w = executor.submit(foo2,"parameter1") 

I do not think the objects are shared between different address spaces, so I think you need to treat these asynchronous functions as an opaque box: you give it data at start time, and you get the result when it has finished.

With asyncio and the ThreadPoolExecutor, everything runs in the same address space, so a Python object is available to all functions and threads.
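
A sketch of the difference: a module-level dictionary updated by a ThreadPoolExecutor worker is the same object the main program sees, while a ProcessPoolExecutor worker updates its own copy in another address space (my example, not from the original program):

import concurrent.futures

shared = {"count": 0}

def bump():
    shared["count"] += 1      # updates this process's copy of the dictionary
    return shared["count"]

if __name__ == "__main__":
    with concurrent.futures.ThreadPoolExecutor() as tp:
        tp.submit(bump).result()
    print("after thread:", shared["count"])    # 1 - same address space

    with concurrent.futures.ProcessPoolExecutor() as pp:
        pp.submit(bump).result()
    print("after process:", shared["count"])   # still 1 - the worker changed its own copy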

Creating a thread in an external function

You can create a thread from an external function, so you are responsible for creating and ending the threads.

These threads can use Python services, such as call back to execute a Python function, or access variables and other information.

Your thread needs to register with Python using PyGILState_Ensure() … PyGILState_Release(). After PyGILState_Ensure() the thread holds the GIL; this must be given up when the thread is doing non-Python work, and re-acquired when doing anything with Python.

PyGILState_STATE gstate;
gstate = PyGILState_Ensure();  // this gets the GIL
...
//  Give up the Python GIL lock
Py_BEGIN_ALLOW_THREADS
...
// do some non-Python work, including waits
// Need to do some Python work, so get the GIL
Py_END_ALLOW_THREADS
Py_BuildValue....

PyGILState_Release(gstate);
pthread_exit(0);

You are responsible for terminating the thread at shutdown. This can be done using pthread_cancel(), or by passing a request to the thread saying it needs to end.

Stopping a server cleanly

I had successfully got Python running as a server in a z/OS started task. The next job was to be able to shut it down cleanly. This was much harder than I expected. The concepts apply to all servers – not just a Python server.

My mission.

Now that I can run a Python server as a started task, how do I stop it? I want it to shut down:

  • when a shutdown or other operator request arrives (a thread waits on the operating system to be notified of the request),
  • or after 20 seconds of no activity (during prototyping).

Python has no capability to cancel a thread once it has started running. There is a cancel() request which will remove the work request from the "list of work to run" before it has been scheduled.
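
I assume this refers to something like concurrent.futures.Future.cancel() (or threading.Timer.cancel()); a sketch of the behaviour:

import concurrent.futures
import time

def slow():
    time.sleep(5)

executor = concurrent.futures.ThreadPoolExecutor(max_workers=1)
running = executor.submit(slow)    # starts immediately on the only thread
queued  = executor.submit(slow)    # waits in the work queue

time.sleep(0.1)                    # give the first request time to start
print(running.cancel())            # False - already running, cannot be cancelled
print(queued.cancel())             # True  - removed from the queue before it was scheduled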

My first attempt failed.

  • I created an operator task which goes to sleep, and is woken up when a request arrives
  • and a timeout task. This sleeps for 20 seconds and returns.

My first attempt was to wait for either of these tasks to complete.

Case 1: The operator entered a command

  • The operator task woke up, and signalled shutdown.
  • The time out task carried on waiting till the end of the 20 seconds.
  • The system then shut down.

Case 2: There was no shutdown command, the request timed out

  • The time-out task woke up and signalled shutdown.
  • The operator task carried on waiting. It never returned.
  • I had to cancel it.

My second attempt was to fix the operator task

I changed the operator thread so that it is passed a shutdown_request token. The task waits for either

  • the operating system to signal a command was entered,
  • or the “shutdown_request” was made.

This helped with case 2:

  • The time out task woke up and signalled shutdown.
  • The shutdown_request was posted (to the operator thread).
  • The operator thread woke up, and ended.
  • The server shut down cleanly – success!

My third attempt was to fix the time out task. This was slightly better.

I changed the time-out task so that it is passed a shutdown_request token. I could not get the time-out task to wait on two events, so:

  • When the operator task is woken by a command, it notifies the time-out task through the shutdown_request token.
  • The time-out task sleeps for 5 seconds then wakes up.
  • If the shutdown request has been made, it ends.

My third attempt can delay the shutdown by up to 5 seconds, so the solution is better – but not perfect. Is there a better way? To make it shut down faster you could decrease the internal sleep period, but that means it wakes up more frequently and so uses more CPU while doing nothing constructive.

My fourth attempt worked, by using a timer

I used a timer event instead of waiting.

def callb(handle):
    print("CALLBACK",datetime.utcnow().strftime('%Y-%m-%d %H:%M:%S.%f'),flush=True)
    # tell the operator task to close down. 
    zconsole.kill(handle)

h  = zconsole.init() 
t = threading.Timer(30,callb,[h]) 
print("SCHEDULE ",datetime.utcnow().strftime('%Y-%m-%d %H:%M:%S.%f'),flush=True) 
t.start() 
...
t.cancel()  #to remove the request

Where

  • 30 is the delay before starting, in seconds
  • callb is the name of my function to be called when the timer “pops”
  • [h] is the list of parameters passed to the callback. You must specify a list of parameters [..]; without it I got the message TypeError: callb() argument after * must be an iterable, not int.

The output was

SCHEDULE 2022-07-05 16:50:00.202267
CALLBACK 2022-07-05 16:50:30.208368

This worked!

The timer request can be cancelled by using t.cancel() before it pops. Once it has popped, the callback only runs for a few milliseconds, so there is little left to cancel.

Summary

If you want an application or server to be able to respond to different events you need:

  • To be able to cancel a thread while it is executing (if available), or to be able to send it a "shutdown" request.
  • It is better to schedule an event using a timer, than to have a thread waiting in a sleep. The scheduled event can be cancelled before it executes. A sleep cannot be cancelled.

Running Python as a started task on z/OS

Setting this up was relatively easy. I used the following JCL:

// PROC P='cons'
// SET PY='/usr/lpp/IBM/cyp/v3r8/pyz/bin/python3'
// SET PR='/u/tmp/console'
//*
//STEP1 EXEC PGM=BPXBATSL,REGION=0M,TIME=NOLIMIT,MEMLIMIT=NOLIMIT,
// PARM='PGM &PY &PR/&P..py'
//STDOUT DD SYSOUT=*
//STDERR DD SYSOUT=*
//SYSDUMP DD SYSOUT=*
//CEEDUMP DD SYSOUT=*
//STDIN DD DUMMY

As part of this work, I also developed zconsole which allows a program to get the parameters on the start command, any modify command (and the data), and the stop command.

I tried using print to //SYSOUT2, but was unsuccessful. I could write to a Unix file, but not to a “DD:…” statement.

Following on from a suggestion from Peter Sylvester I used AOPBATCH, and got that working as well.

// SET PY='/usr/lpp/IBM/cyp/v3r8/pyz/bin/python3'
//*
//AOP EXEC PGM=AOPBATCH,PARM='/&PY. //DD:STDIN P1 P2'
//STDIN DD PATH='/u/tmp/zos/z.py'
//STDENV DD *
PATH=/usr/lpp/IBM/cyp/v3r8/pyz/bin:/u/tmp/zos
LIBPATH=/u/tmp/zos
PYTHONPATH=/u/tmp/zos
//STDOUT DD SYSOUT=*
//STDERR DD SYSOUT=*
//...

This read from the Unix file in STDIN.

The parameters passed to the Python script are P1 P2

The best I/O is no I/O

In the course of an email exchange there was a discussion about the performance of z/OS where the DASD was in an active-active environment – so every write I/O is mirrored over a network – and about avoiding disk I/O for work files. VIO refers to data set allocations that exist in paging storage only; z/OS does not use a real device unless it must page out the data set. Of course you need enough real storage so you do not page!

In ISMF you can define a storage group, type of VIO which uses Virtual I/O.

I have a Storage Group of SGVIO which says use VIO if the data set size is less than 2000000 KB. If more than this is needed it will use DASD.

If you are in a mirrored environment, and you have DASD volumes which are just used for temporary files or paging, then these volumes do not need to be mirrored. (But you may want to mirror them in case someone puts a non-temporary data set on the volume.)

Fixing Python setup.py

I had been using setup.py to build some external modules for Python on z/OS. Unfortunately, deep down in the configuration, the wrong parameters were being used, and I was unable to fix the problem.

Thanks to Steven Pitman who gave me a bypass.

By overriding the build_ext command I was able to remove the unwanted compiler options: -fno-strict-aliasing and '-Wa,xplink'.

You can do it with

if '-fno-strict-aliasing' in self.compiler.compiler_so: 
       self.compiler.compiler_so.remove('-fno-strict-aliasing') 

As shown in the code below; the extra code is the BuildExt class and the cmdclass line.

The line cmdclass = {'build_ext': BuildExt} causes my function to be executed.

import setuptools 
from setuptools import setup, Extension 
import sysconfig 
import os 
os.environ['_C89_CCMODE'] = '1' 
from setuptools.command.build_ext import build_ext 
from setuptools import setup 
class BuildExt(build_ext): 
   def build_extensions(self): 
     print(self.compiler.compiler_so) 
     if '-fno-strict-aliasing' in self.compiler.compiler_so: 
       self.compiler.compiler_so.remove('-fno-strict-aliasing') 
     if '-Wa,xplink' in self.compiler.compiler_so: 
        self.compiler.compiler_so.remove('-Wa,xplink') 
     super().build_extensions() 
...
setup(name = 'console', 
   ...
   cmdclass = {'build_ext': BuildExt},
   ext_modules = [Extension('console.zconsole',['console.c'],
   ... 

Using pthread_create to use subtasks.

I wanted to use the C pthread_create interface to attach a subtask. This took a few hours to get right (mainly because I do not have a PhD level of C coding). I could get it to work with a simple parameter like a string, but not when passing a structure.

I thought I'd document it for others trying to use it – and for me, when I want to use it again in six months' time and have forgotten how to do it.

The calling program

struct thread_args {
char a[8];
char *method;
...
} ;

// create a dynamic structure

struct thread_args *zargs = malloc (sizeof (struct thread_args));

// initialise it

memcpy(&zargs -> a[0],"01234567",8);
zargs -> method = "method";

// attach a thread, and call the "cthread" function, passing zargs
rc = pthread_create(&thid, NULL, cthread, zargs);

My attached program (cthread)

void * cthread(void *_arg) {
struct thread_args * tA = (struct thread_args *) _arg ;

printf("Inside cthread %8.8s\n", tA -> a);

char * ret = 0;
pthread_exit(ret);
return 0;
}

It is easy when you have a working example!

Spice up a C program by using __asm__ to include inline assembler.

The C compiler on z/OS has an extension which allows you to put assembler code inline within a C program. This can be useful, for example for accessing z/OS macros. __asm__ is very badly documented in the publications, but this post gives a good overview.

Overall the use of __asm__ works, but you have to be careful. For small bits of assembler it was quicker to use __asm__ instead of creating a small assembler program and linking that with the C program.

This blog post documents some of my experiences.

Using and compiling the code

You put code in __asm__(...);, _asm(...); or asm(...);. I think these are all the same.

To use macros or copy files within the code you need the ASMLIB statement in your JCL.

//S1          JCLLIB ORDER=CBC.SCCNPRC 
//STEP1    EXEC PROC=EDCC,INFILE='COLIN.C.SOURCE(ASM)', 
//         LNGPRFX='CBC',LIBPRFX='CEE', 
//         CPARM='OPTFILE(DD:SYSOPTF)' 
//COMPILE.ASMLIB DD DSN=SYS1.MACLIB,DISP=SHR 

Basic function

The asm() instruction has the following parts

  • asm(
  • “a string of source which can contain %[symbolname] “. Each line of assembler has “\n” at the end of the line.
  • the output code will be formatted to conform to normal HLASM layout standards.
  • “:” a list of output symbols and their mapping to C variables.
  • “:” a list of symbols used as input and their mapping to C variable names.
  • ":" a list of registers that may have been changed (clobbered) by this code, for example "r14" and "r15".
  • “);”

Example code

__asm__(
"*2345678901234567890xxxxxxxxxxxxxxxxxxxxxxxx\n"
" WTO '%[PARMS]' \n"
:
:
[PARMS]"s"("zzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzz")
: "r0", "r1", "r14", "r15"
);

The PARMS symbol is a string with a value of zzzz… It is used in the WTO '%[PARMS]' statement.

Long statements – wrapping and continuation

The generated code from the above statement is

*2345678901234567890xxxxxxxxxxxxxxxxxxxxxxxx                             000023  
         WTO   'zzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzX 000023  
               zzz'                                                      000023  

We can see

  • the *234… starts in column 1
  • the WTO instruction is in column 10
  • because the string was very long, it has been split at column 71 and wrapped onto the next line at column 16.
  • A continuation character was inserted at column 72

This means you do not need to worry too much about the formatting of the data.

The code looks a bit buggy.

Making the WTO into an operand and a comment

__asm__(
"*2345678901234567890xxxxxxxxxxxxxxxxxxxxxxxx\n"
" WTO abc '%[PARMS]' \n"
:
:
[PARMS]"s"("zzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzz")
: "r0", "r1", "r14", "r15"
);

Gives a warning message

*2345678901234567890xxxxxxxxxxxxxxxxxxxxxxxx                            
         WTO   abc                     'zzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzX
               zzzzzzzzzzzzzzzzzzzzzzzzzzz' 

ASMA432W Continuation statement may be in error - comma omitted from continued statement.                            

What are the __asm__ parameters?

The first parameter is a C string containing the assembler instructions. Each line ends with a “\n”. You specify substitution variables using %[name] where the name is defined later.

You can use multiple lines, for example

asm(
" LA R2,%[p1] \n"
" TIMEUSED LINKAGE=SYSTEM,CPU=MIC,STORADR=(2) \n"
:

The C compiler treats the “…” “…” as one long literal constant.

There are three sets of values between the delimiters “:” “:” “:”

  • Output variables
  • Input variables
  • General Registers which your code changes

A C variable can be used for

  • Output only. This is in the output section, and has a definition with "=", for example [p1] "=m"(pASCB). asm() may generate code to store the value afterwards.
  • Input and output. This is in the output section, and has a definition with "+", for example [i1] "+m"(data). asm() may generate code to load the value before use, and store it afterwards.
  • Input only. This is in the input section. It does not have a qualifying character with its definition, for example [i2] "m"(data).
  • Dummy – but used as a register. If you specify that you want a register (letting asm() choose which register), it needs a variable either to load or to store, depending on whether it is write, read, or both. For example [rr] "=r"(val), where I defined the C variable val.

Using C variables in the assembler code

There are a variety of different "types" of data, from memory to offset. I could not see the difference between them; some gave the same output. I tended just to use "m" for memory fields.

Use of variables

// this code gets the ASCB address from low core

long pASCB;
asm(" LLGT 1,548 \n"
" STG 1,%[p1] \n"
: [p1] "=m"(pASCB)
:
: "r1"
);

This (64 bit) code

  • Clears and loads register 1 with the value in address decimal 548. (The ASCB value) .
  • It stores register 1 in the variable %[p1]
  • [p1] is defined as
    • “=” means this field is write only
    • m is a memory address
    • (pASCB) is the variable to use. The compiler replaces this with (in my case) the value 2248(4) – the address of the variable in the format offset(base register).
  • There was no input data
  • Register r1 was “clobbered” (meaning it was changed in my assembler code).

Using constants is not quite what I expected.

printf("ttime %ld\n",data);
asm(" LA 1,%[i1] ccp\n"
" LA 1,%[i2] \n"
" LA 1,%[i3]\n"
:
: [i1] "i"("999"),
[i2] "i"(998),
[i3] "i"("=c\'ABCD\'")
: "r1","r2"
);

Gives code

 LA    1,999
 LA    2,998
 LA    2,=c'ABCD'            

Using [i2] "i"("COLIN") gave

LA 2,COLIN
ASMA044E Undefined symbol – COLIN

An example of using registers

David Clayford posted the following

inline bool isauth() {
   int rc;
   __asm (" TESTAUTH FCTN=1"
         : "=NR:r15"(rc)
         :
         : "r1", "r14", "r15");
   return rc == 0;
}

This code invokes the TESTAUTH macro which gives a return code in register 15.

The “=NR: r15” (rc) means

  • = the __ASM__ only writes, it does not read the variable
  • NR : Use the named register
  • r15 use this register
  • (rc) and store it in the variable called rc

Using generated registers – or not

You specify that you want a register allocated to you by using the type “r”.

int val = 40;
asm(" LLGT %[rr],548 pASCB \n"
" STG %[rr],%[p1] ZZZZZ \n"
: [p1] "=m"(pASCB)
: [rr] "r"(val)
: "r1","r2"
);

The lack of a “=”, “+” or “&” in front of the “r” means read only use of the register, so load the register with the value before my code.

Produces

   LGF   r6,val(,r4,2248)   - This is generated                  
   LLGT  6,548                   pASCB           
   STG   6,2240(4)               ZZZZZ           

This code has been given register 6 to use

  • It loaded the value of val into it – because I had specified val in the list of input variables.
  • It used the same register wherever I had specified %[rr].

When I had specified the register as an input/output register by

: [p1] "=m"(pASCB), [rr] "+r"(val)
:
: "r1","r2"

The "+" says it is read and written. The output code was

     LGF      r6,val(,r4,2248)   Generated                      
     LLGT     6,548              My Code pASCB                
     STG      6,2240(4)          My Code ZZZZZ 
     LGR      r0,r6              Generated                    
     LGFR     r0,r0              Generated                    
     ST       r0,val(,r4,2248)   Generated                    

So there is code generated to load the register from val before my instructions, and to save the register back into the variable val after the last of my instructions.

Personally, I do not think I would use "r"; I would select my own register(s) and use them.

If I wanted to use C variables, I could specify those, and explicitly load and save them.

Some instructions do not work.

char buffer[256];

asm(
" MVC %[p1](4),548 pASCB \n"

: [p1] "=m"(buffer)
:
: "r1","r2"
);

This fails with

: [p1] "=m"(buffer)
CCN4454 Operand must be an lvalue.

You need to use

You need to use [p1] "=m"(buffer[0]) instead of (buffer). (But this is just normal C.)

The instruction then fails because

MVC 2240(4)(4),548

Is not a valid instruction.

You need to use

char buffer[256];
asm(
" LA 1,%[p1] \n"
" MVC 0(4,1),548 pASCB \n"

:
[p1] "=m"(buffer[0])
:
: "r1","r2"
);

Which successfully generates

  LA r1,2240(r4,)
  MVC 0(4,r1),548

Using literals

You can use assembler literals in your code, for example

asm(
" LA 1,=C'ABCD' \n"
:
:
: "r1"
);

This works. There is a section in the listing

Start of ASM Literals
         =C'ABCD'
End of ASM Literals

Using assembler macros

When you use a macro, you need to review the generated code, make a note of the registers it uses, and then update the "clobbers" list.

asm(
" LA 2,%[p1] \n"
" TIMEUSED LINKAGE=SYSTEM,CPU=MIC,STORADR=(2) \n"

:…

This used r14,r0,r15

There was an error

BNZ   *+8  
*** ASMA307E No active USING for operand *+8         

I had to use the following to get it to work.

long long CPUUSED;
asm(
" PUSH USING \n"
" BASR 3,0 \n"
" USING *,3 \n"

" LA 2,%[p1] \n"
" TIMEUSED LINKAGE=SYSTEM,CPU=MIC,STORADR=(2) \n"
" POP USING \n"
:
[p1] "=m"(CPUUSED)
:
: "r0","r1","r2","r3","r14","r15"
);
printf("TIMEUSED %ld\n",CPUUSED);

Using assembler services from a (64 bit) C program.

I wanted a Python started task on z/OS to respond to the operator Stop and Modify commands (for example, to pass commands to the program).

I wrote a Python extension in C, and got the basics working. I then wanted to extend it a bit more. I learned a lot about the interface from C to assembler, and about some of the newer linkage instructions.

Some of the problems I had were subtle, and the documentation did not cover them.

C provides a run-time facility called __console2 which provides a write to the console, and a read from the console.

The output from the write to the console using __console2 looks like

BPXM023I (COLIN) from console2

The output has the BPXM023I prefix, which I thought looked untidy and unnecessary.

Thanks to Morag, who pointed out that you get the BPXM023I (userid) prefix if the userid does not have READ access to BPX.CONSOLE in class(FACILITY).

To use the Modify operator command with __console2 you have to use a command like

F PYTTASK,APPL='mydata'

Which feels wrong, as most of the rest of z/OS uses

F PYTTASK,mydata

This can be implemented using the QEDIT assembler interface.

The journey

I’ll break my journey into logical steps.

Information on programming the C to assembler interface

There is not a lot of good information available.

Calling an assembler routine from a C program.

Setting up the linkage

A 64 bit program uses the C XPLINK interface between C programs.

To use the traditional Assembler interface you need to use

#pragma linkage(QEDIT , OS)

rc = QEDIT( pMsg, 6);

C does not set the Variable Length parameter list bit (the high-order bit of the last parameter address, X'80…'), so you cannot use parameter lists with a variable length and expect traditional applications to work. You could always pass a count of parameters, or build the parameter list yourself.

Register 1 pointed to a block of storage, for example to parameters

00000000 203920D8 00000050 082FE3A0

which is two 64-bit addresses: the address of the pMsg data, and the address of the fullword constant 6.

The C code invokes the routine by

LG r15,=V(QEDIT)(,…,…)
BALR r14,r15

even though use of BALR is deprecated.

Allocating variable storage

The z/OS services my assembler program used need 31-bit storage.

I allocated this in my C program using

char * __ptr32 pMsg;
pMsg = (char *) __malloc31(1024);

I then passed this to my assembler routine.

Coding the assembler routine

Assembler Linkage

The basic linkage was

BSM 14,0
BAKR 14,0

…….
PR go back

This is where it started to get complicated. The BAKR… PR is a well documented and commonly used interface.

A BRANCH AND STACK instruction BAKR 14,15 says: branch to the address in register 15, save the value of register 14 as the return address, and save the registers and other status in the linkage stack. The code pointed to by register 15 is executed, and at the end there is a PROGRAM RETURN (PR) instruction which loads the registers from the linkage stack and goes to the return address.

The BRANCH AND STACK instruction BAKR 14,0 says: do not branch, but save the status and the return address. A subsequent PR instruction will go to where register 14 points.

Unfortunately, with the BALR linkage used by the C code, the BAKR … PR combination does not entirely work.

You can be executing a program at a 64 bit address (such as 00000050 089790A0) in 24, 31, or 64 bit addressing mode.

  • In 64 bit mode, all the contents of a register are used to address the data.
  • In 31 bit mode, only the bottom (right) half of the register is used to address the data – the top half is ignored
  • In 24 bit mode, only the bottom 24 bits of the register are used to address the data.

There are various instructions which change which mode the program is executing in.

When a BAKR or BASSM (BRANCH AND SAVE AND SET MODE) is used, the return address has the mode (24, 31, 64) saved with it as part of the saved data. When this address is used to branch back, the saved mode information is used to switch back to the original mode.

When BALR (or BASR) is used to branch to a routine, the return address is saved in register 14, but the mode information is not stored. When this address is used to branch back, the default mode (24 bit) is used to set the mode. This means the return tries to execute with a 24 bit address – and it fails.

You solve this problem by using a BRANCH AND SET MODE instruction, BSM 14,0. The value of 0 says do not branch, so this just updates register 14 with the mode information. When BAKR is issued, the correct state is saved with it.

If you use the "correct" linkage you do not need to use BSM. It is only needed because the C code is using an outdated interface, which it keeps for compatibility with historically compiled programs.

Note: BSM 0,14 is also a common usage. It is the standard return instruction in a program entered by means of BRANCH AND SAVE AND SET MODE (BASSM) or a BRANCH AND SAVE (BAS). It means branch to the address in register 14, and set the appropriate AMODE, but do not save in the linkage stack.

Using 64 and 31 bit registers

Having grown up with 32 bit registers, it took a little while to understand the usage 64 bit registers.

In picture terms all registers are 64 bit, but you can have a piece of paper with a hole in it which only shows the right 32 bit part of it.

When using the full 64 bit registers the instructions have a G in them.

  • LR 2,3 copies the 32 bit value from register 3 into the right 32 bits of register 2
  • LGR 2,3 copies the value of the 64 bit register 3 into the 64 bit register 2

If there is a block of storage at TEST with content 12345678, ABCDEFG

  • R13 has 22222222 33333333
  • copy R13 into Reg 7. LGR R7,R13. R7 contains 22222222 33333333
  • L R7,TEST. 32 bit load of data into R7. R7 now has 22222222 12345678. This has loaded the visible “32 bit” (4 bytes) part of the register, leaving the rest unchanged.
  • LG R8,TEST. 64 bit load into Reg 8. R8 now has 12345678 ABCDEFG . The 8 bytes have been loaded.
  • “clear high R9” LLGTR R9,R9. R9 has 00000000 ……… See below.
  • L R9,TEST . 32 bit (4 bytes) load into R9. R9 now has 00000000 12345678

Before you use a register set by 32 bit code as a 64 bit address, you need to clear the top half of the register.

The LOAD LOGICAL THIRTY ONE BITS instruction (LLGTR R1,R2) takes the right-hand part of R2, copies it to the right-hand part of R1, and sets the left-hand part of R1 to 0. Or, to rephrase it, LLGTR R2,R2 just clears the top of the register.

Using QEDIT to catch operator commands

QEDIT is an interface which allows an application to process the operator commands start, modify, and stop on the address space. For example

  • S PYTASK,,,’STARTPARM’
  • f PYTASK,’lower case data’
  • p PYTASK

Internally the QEDIT code uses CIBs (Command Input Buffers), which contain information about the operator action.

(I think QEDIT comes from “editing”= removing the Queue of CIBs once they have been processed – so Q EDIT).

The interface provides an ECB for the application to wait on.

The documentation was ok, but could be clearer. For example the sample code is wrong.

In my case I wanted a Python thread which used console.get() to wait for the operator action and return the data. You then issue console.get() again to get the next operator action.

My program had logic like

  • Use the EXTRACT macro to get the address of the CIBs.
  • If this job is a started task, and it is the ‘get’ action then there will be a CIB with type=Started
    • Return the data to the requester
    • Remove the CIB from the chain
    • Return to the requester
  • Set the backlog of CIBs supported ( the number of operator requests which can be outstanding)
  • WAIT on the ECB
  • Return the data to the requester
  • Remove the CIB from the chain
  • Return to the requester

The action can be “Start”, “Modify”, or “Stop”

Can I use __ASM__ within C code to generate my own assembler?

In theory, yes. The documentation is missing a lot of information, and I could not get my simplest tests to work and return what I was expecting.