Setting up for PING can be difficult!

  • Setting up ping within one machine is trivial
  • Setting up ping between two machines is relatively easy
  • Setting up ping with 3 machines can be hard to get right.

Most documentation describes how to set up PING between two machines, and does not mention anything more complex.

Ancient history (well 1980’s)

To understand modern TCPIP networks, it helps to know about the origins of Ethernet.

Originally Ethernet was a bus: a very long cable. You plugged your computer into this bus. Each computer on the bus had a unique 48 bit MAC address. You could send a request to another computer on the bus using its MAC address, or you could send a request to ALL computers on the bus. These days that broadcast might be: "does anyone have this IP address…?"

An Ethernet bus is connected to an Ethernet router which can route packets to other routers.

Ethernet has evolved: instead of a long bus shared by all users, you have a switch device, and you plug the Ethernet cable from your computer into the switch. Conceptually it is the same, but with the switch you get much better performance and usability. You still have routers to get between different Ethernet environments. The request sent to all computers, "does anyone have this IP address…?", goes to the switch, and the switch sends it to all plugged-in computers.

When you start TCPIP, it configures the Ethernet hardware adapter with the addresses it should listen for.

Other terminology and concepts

  • A router is a networking device which forwards packets between different networks. Packets of data get sent from the originator through zero or more routers to the final destination.
  • A gateway has several meanings
    • It can connect different network types, for example acting as a protocol translator, and it can have a built-in firewall
    • It can route IP traffic between different IP subnets
    • It can loosely mean a router

Usually a router or gateway is a dedicated device or hardware.

You can have a computer act as a router or gateway; it takes data from one interface and routes it to a different interface. The computer can pass the data through a firewall, or transform it, for example converting internal IP addresses to external IP addresses (NAT).

At a conceptual level, a computer's network adapter is configured to listen only for packets which match the IP addresses of the interface, and to ignore the rest.

If you want a computer to act as a router or gateway, you need to configure the network adapter to listen to all traffic, and to process the traffic. This is important when routing traffic from computer A through computer B to computer C.

A simple one hop ping, first time

I have set up my laptop to talk to a server over Ethernet.

I have configured my laptop using

sudo ip -6 addr add 2001:7::/64 dev enp0s31f6

This says: for any IP V6 address starting with 2001:0007:0000:0000, try sending it down the Ethernet cable with interface name enp0s31f6.
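
As a quick check of what was configured (the interface name is from my machine, yours will differ) you can use

ip -6 addr show dev enp0s31f6
ip -6 route show dev enp0s31f6

The first command lists the IP V6 addresses on the interface; the second lists the routes, including the on-link route created by the addr add above.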

Ping a non-existent address

If I try to ping an address, say 2001:7::99, a packet is sent to all computers on the Ethernet bus

from myipAddress to everyone on the Ethernet bus, does anyone have IP address 2001:7::99?

There is no reply because no computer on the bus has the address.

Ping the adjacent box

On the server, the IP address of the Ethernet cable is 2001:7::2.

If I ping this address, there are the following flows

From myipAddress (and myMAC) to everyone on the Ethernet bus, does anyone have IP address 2001:7::2?
From 2001:7::2 to myipAddress (MAC), yes I have that IP address. My Ethernet MAC address is …

My laptop puts the remote IP address and its MAC address into its neighbour cache, then issues the ping request.

The ping request looks in the neighbour cache, finds the IP address and MAC address, and issues:

From myipAddress to 2001:7::2 at macAddress .. ping request.

The second and later times I issue a ping

The ping request looks in the neighbour cache, finds the IP address and MAC address, and issues the ping:

From myipAddress to 2001:7::2 at macAddress .. ping request.

So while the neighbour cache still has the IP address of the target and its MAC address, the ping can be routed to the next hop.
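
You can display the neighbour cache (the interface name is from my machine) using

ip -6 neigh show dev enp0s31f6

Each entry shows the IP address, the MAC address (lladdr) and a state such as REACHABLE or STALE, so you can see whether the target's MAC address has been learned.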

Setting up a multi hop ping

I have my laptop connected via a physical Ethernet cable to my server. The server is connected to z/OS via a virtual Ethernet connection. The IP address of the z/OS end of the virtual Ethernet cable is 2001:7::4.

The ping 2001:7::4 request on my laptop does not work (as we saw above), because TCPIP asked everyone on the Ethernet bus if it has address 2001:7::4 – and no machine replied.

You need to define the link to the server machine as a gateway or router-like device which can handle IP addresses which are not on the Ethernet bus. You define it like

sudo ip -6 route add 2001:7::/64 via 2001:7::2

This says: send any traffic with a destination IP address of 2001:0007:0000:0000:* via the address 2001:7::2.

This requires 2001:7::2 to be known about, and so needs the following to be configured first

sudo ip -6 route add 2001:7::2 dev enp0s31f6

so that TCPIP knows where to send the traffic.

With route add 2001:7::2 dev enp0s31f6 in place, a broadcast is sent on the Ethernet bus asking – does anyone have 2001:7::2? My server replies saying yes, I have it. This is the same as for the ping above.
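
You can check what the routing table looks like, and which route would be used for a particular destination, with

ip -6 route show
ip -6 route get 2001:7::4

route get shows the device, the gateway (via) and the source address the kernel would use for that destination.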

In summary

To send traffic on the same Ethernet bus you use

sudo ip -6 route add 2001:7::2 dev enp0s31f6

To route it via a router, switch, or computer acting as a router or switch you need

sudo ip -6 route add 2001:7::/64 via 2001:7::2
or, specifying the interface as well,
sudo ip -6 route add 2001:7::/64 via 2001:7::2 dev enp0s31f6

The computer acting as a router or switch may need additional configuration. For example to allow it to route traffic from one Ethernet bus to another Ethernet bus you need to enable packet forwarding. On Linux to enable forwarding for all interfaces use

sysctl -w net.ipv6.conf.all.forwarding=1
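
The -w setting does not survive a reboot. To display the current value, and (on most distributions) make the setting permanent:

sysctl net.ipv6.conf.all.forwarding
echo "net.ipv6.conf.all.forwarding=1" | sudo tee /etc/sysctl.d/99-forwarding.conf
sudo sysctl -p /etc/sysctl.d/99-forwarding.conf

The file name 99-forwarding.conf is just an example.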

On z/OS you use a static route such as

ROUTE 2001:7::/64 2001:7::3 JFPORTCP6

Where 2001:7::/64 is the range of addresses, 2001:7::3 is (one of) the addresses of the interface at the remote end of the cable, and JFPORTCP6 is the interface name. This is similar to the Linux route statement above.

You might need to set up the firewall. On my server I needed

sudo ufw route allow in on eno1 out on tap2

What IP Address does the sender have?

This is where it starts to get more complex.

Every network connection has at least one IP address.

With IP V6

  • Each interface gets a link-local ("internal") IP address such as fe80::9b07:33a1:aa30:e272
  • You can allocate an external address using the Linux command sudo ip -6 addr add 2001:7::1 dev enp0s31f6
  • If the interface has one external IP address configured, then this will be used.
  • If the interface has more than one external address configured then the first in the list may be used (not always).
  • If the interface does not have an external IP address, then TCPIP will find one from another interface, or allocate one for the duration of the request, such as 2a00:23c5:978f:9999:210a:1e9b:94a4:c8e. This address comes from my wireless connection

When my laptop is started, only the wireless connection has an IPV6 address. A ping request to 2001:7::4 had the origin IP address of 2a00:23c5:978f:9999:210a:1e9b:94a4:c8e which is the first address in the list for the wireless connection.
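
You can check which source address the kernel will pick for a given destination, for example

ip -6 route get 2001:7::4

The output includes a src field showing the source address that will be used for the request.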

You can tell ping which address to use, for example

ping -I 2a00:23c5:978f:9999:cff1:dc13:4fc6:f21b 2001:7::4 (this fails because the server does not know how to send the response back to the requester)

I defined a new address for the interface using

sudo ip -6 addr rep 2002:7::1 dev enp0s31f6

I could issue ping -I 2002:7::1 2001:7::4, but this failed to get a response, because the back-end and intermediate nodes did not know how to get the response back to 2002:7::1.

Does this sender address matter?

Yes, because the remote end needs to have a definition to be able to send the response back.

I had my laptop connected to a Linux server over Ethernet, which in turn was connected to z/OS over a virtual Ethernet.

On z/OS I could see the ping request arrive, but it could not send the response back because it did not know how to get to 2a00:23c5:978f:9999….

I configured the laptop end of the Ethernet to give it an IP address 2001:7::1

sudo ip -6 addr add 2001:7::1/64 dev enp0s31f6

The server end of the Ethernet to the laptop already has the IP address 2001:7::2 (see above); on the server I added a route back to the laptop's address

sudo ip -6 route add 2001:7::1/128 dev eno1

I configured the server to z/OS with an IP address and a route

sudo ip -6 addr add 2001:7::3/128 dev tap2
sudo ip -6 route add 2001:7::4/128 dev tap2

Now when I did the ping, the originator was 2001:7::1.

I configured the z/OS interface to send stuff back

INTERFACE JFPORTCP6 
DEFINE IPAQENET6
CHPIDTYPE OSD
PORTNAME PORTC

INTERFACE JFPORTCP6
ADDADDR 2001:DB8:8::9
INTERFACE JFPORTCP6
ADDADDR 2001:DB8::9
INTERFACE JFPORTCP6
ADDADDR 2001:7::4

START JFPORTCP6

and the routing

BEGINRoutes 
; Destination Gateway LinkName Size
ROUTE 2001:7::/64 2001:7::3 JFPORTCP6 MTU 5000
...
ENDRoutes

This says: send all traffic with a destination address starting 2001:0007:0000:0000 to interface JFPORTCP6, via the gateway 2001:7::3, which is the address of the remote (server) end of the Ethernet.

The server machine needs to act as a router between the different Ethernet buses. You can display and configure this using

sysctl -a |grep forwarding
sudo sysctl -w net.ipv6.conf.all.forwarding=1

Packet forwarding on z/OS

By default the z/OS interface only listens for packets with one of the IP addresses of the interface. For z/OS to be able to act as a router (accept all packets, and route them to other interfaces) you need:

  • IPCONFIG DATAGRAMFWD in the TCP/IP Profile
  • PRIROUTER on the INTERFACE definition. This configures the Ethernet adapter (hardware) so that if a datagram is received at this device for an unknown IP address, the datagram is routed to this TCP/IP instance.

But normally if you are running z/OS you use a cheaper physical router, rather than using z/OS to do your routing. It might only be people like me, who run z/OS on their laptop, who want to try routing through z/OS.

Passing parameters to Java programs

There is a quote by Alexander Pope

A little learning is a dangerous thing.
Drink deep, or taste not the Pierian Spring;
There shallow draughts intoxicate the brain,
And drinking largely sobers us again.

I was having problems passing parameters to Java. I had a little knowledge about Java web servers on z/OS, so I was confident I knew how to configure a web server; unfortunately I did not have enough knowledge.

I was trying to get keyrings working with a web server on z/OS, but what I specified was not being picked up. In this post I'll focus on keyrings, but it applies to other parameters as well.

Setting the default values

You can give Java a default keyring name

-Djavax.net.ssl.keyStore=safkeyring://START1/MQRING

This says: if the application does not specify a keystore, use this value. If the application does not explicitly create its own keystore, a default keystore is created using these parameters.

Specify the value yourself

You can create your own keystore, passing the keystore name and keystore type. You could use your own option, such as -DCOLINKEYRINGNAME=safkeyring://START1/MQRING, and use that when configuring the keystore to Java. In this case the -Djavax.net.ssl.keyStore=safkeyring://START1/MQRING is not used. This is the same for other parameters.
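
For example, to set the defaults on the Java command line (the class name is made up, and JCERACFKS is the keystore type I would expect for a SAF keyring; check what your server actually requires):

java -Djavax.net.ssl.keyStore=safkeyring://START1/MQRING -Djavax.net.ssl.keyStoreType=JCERACFKS com.example.MyServer

If the application builds its own keystore from its own options, these defaults are ignored.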

Web server configuration

The Apache Tomcat and IBM Liberty web servers are configured using server.xml files, and the web server creates the keystores etc using the information in the file.

For example a web server may have content like

<Connector port="${port.http}" protocol="${http.protocol}" 
     SSLEnabled="true" 
     maxThreads="550" scheme="https" secure="true" 
     clientAuth="false" sslProtocol="TLS" 
     keystoreFile="${keystoreFile}" 
     keystorePass="${keystorePass}" 
     keystoreType="${keystoreType}" 
     sslEnabledProtocols="${ssl.enabled.protocols}" /> 

where ${…} is a variable which is substituted at start up. ${keystoreFile} would use the value of keystoreFile, for example set with the option -DkeystoreFile="safkeyring://START1/MYRING"
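
For Tomcat, one way to supply these options (assuming the variable names used in the server.xml above) is through the options the start script picks up, for example

export CATALINA_OPTS="-DkeystoreFile=safkeyring://START1/MQRING -DkeystoreType=JCERACFKS"

Liberty has its own mechanism (a jvm.options file), so check how your particular server expects JVM options to be passed.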

You can have multiple <Connector..> each with different parameters, so using -Djavax.net.ssl.keyStore etc would not work.

You need to know which options to use, because setting the Java defaults using -Djavax.net.ssl.keyStore… may be overridden.

You also need to know which server.xml file is being used! I was getting frustrated when the changes I made to server.xml were not in the server.xml being used!

Write instructions for your target audience – not for yourself.

Over the last couple of weeks, I’ve been asked questions about installing two products on z/OS. I looked at the installation documentation, and it was written the way I would write it for myself – it was not written for other people to follow.

I sent some comments to one of the developers, and as the comments mainly apply to the other products as well, I thought I would write them down – for when another product comes along.

I've been doing some documentation for AT-TLS, which allows you to give applications TLS support without changing the application, so I'll focus on a product using TCP/IP.

What is the environment?

The environment can range from one person running z/OS on a laptop, to running a Parallel Sysplex where you have multiple z/OS instances running as a Single System Image; and taking it further, you can have multiple sites.

What levels of software

Within a Sysplex you can have different levels of software, for example one image at z/OS 2.4 and another image at z/OS 2.5. You tend to upgrade one system to the next release, then, when this has been demonstrated to be stable, migrate the other systems in turn.

Within one z/OS image you can have multiple levels of products, for example MQ 9.2.3 and MQ 9.1. People may have multiple levels so they test the newer level, and when it looks stable, they switch to the newer level and later remove the older level. If the newer level does not work in production – they can easily switch back to the previous level.

Each version may have specific requirements.

  • If your product has an SVC, you may need an SVC for each version, unless the higher level SVC supports the lower level code.
  • If your product uses a TCP/IP port, you will need a port for each instance.

You need to ensure your product can run in this environment, with more than one version installed on an image.

How do things run?

Often z/OS images and programs run for many months, for example IPLing every three months to pick up the latest fixes. Your product instance may run for 3 months before restarting. If you write messages to the joblog, or have output going to the JES2 spool, you want to be able to purge old output without shutting down your instance. You can specify options to "spin" off output and make the file purge-able.
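
A minimal sketch of the sort of JCL options meant here (the job and DD names are made up; JESLOG can also take an interval or size, see the JCL reference):

//MYPROD1  JOB 1,CLASS=A,JESLOG=SPIN
//TRACE    DD  SYSOUT=*,SPIN=UNALLOC

JESLOG=SPIN allows the job's JES logs to be spun off, and SPIN=UNALLOC makes the SYSOUT data set eligible for purging once it is unallocated.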

Your instance may need to be able to refresh its parameters. For example, if a key in a keyring changes, you need to close and reopen the keyring. This implies either a refresh command, or opening the keyring for each request.

Who is responsible for the system?

For me – I am the only person using the system and I am responsible for everything.

For big systems there will be functions allocated to different departments:

  • Installation of software (getting the libraries and files to the z/OS image)
  • The z/OS systems team – creating and updating the base z/OS system
  • The Security team – this may be split into platform security (RACF) and network security
  • Data management – responsible for data, backup (and restore), migration of unused data sets to tape, ensuring there is enough disk space available.
  • Communications team – responsible for TCPIP connectivity, DNS, firewalls etc.
  • Database team – responsible for DB2 and other products
  • Liberty team – responsible for Liberty, and for products such as z/OSMF which are built on top of Liberty.
  • MQ – responsible for MQ, and MQ to MQ connectivity.

Some responsibilities could be done by different teams, for example creating the security profile when creating a started task. This is a “security” task – but the z/OS systems programmer will usually do it.

How are systems changes managed?

Changes are usually made on a test system and migrated into production. I’ve seen a rule “nothing goes into production which has not been tested”. Some implications of this are

  • No changes are typed into production. A file can be copied into production, and a file may have symbolic substitution, such as SYSTEM=&SYSNAME. You can use cut and paste, but no typing. This eliminates problems like 0 being misread as O, and 1,i,l looking similar.
  • Changes are automated.
  • Every change needs a back-out process – and this back-out has been tested.
    • Delete is a 2 phase operation. Today you do a rename rather than a delete; next week you do the actual delete. If there is a problem with the change you can just rename it back again. Some objects have non obvious attributes, and if you recreate an object, it may be different, and not work the same way as it used to.

There are usually change review meetings. You have to write a change request, outlining

  • the change description
  • the impact on the existing system
  • the back-out plan
  • dependencies
  • which areas are affected.

You might have one change request for all areas (z/OS, security, networking), or a change request for each area, one for z/OS, one for security, one for networking.

Affected areas have to approve changes in their area.

How to write installation instructions

You need to be aware of differences between installing a product first time, and successive times. For example creating a security definition. It is easy to re-test an install, and not realise you already have security profiles set up. A pristine new image is great for testing installation because it is clean, and you have to do everything.

Instructions like

  • Task 1 – create sys1.proclib member
  • Task 2 – define security profile
  • Task 3 – allocate disk storage
  • Task 4 – define another security profile
  • Task 5 – update parmlib

may make sense when one person is doing the work, but not if there are many teams.

It is better to have a summary by role like

  • z/OS systems programmer
    • create proclib member
    • update parmlib
  • Security team
    • Define security profile 1
    • Define security profile 2
  • Storage management team
    • Allocate disk space

and have links to the specific topics. This way it is very clear what a team’s responsibilities are, and you can raise one change request per team.

This summary also gives a good road map so you can see the scale of the installation task.

It is also good to indicate if this needs to be done once only per z/OS image, or for every instance. For example

  • APF authorise the load libraries – once per z/OS image
  • Create a JCL procedure in SYS1.PROCLIB – once per instance

Some tasks for the different roles

z/OS system programmers

  • Create alias for MYPROD.* to a user catalog
  • APF authorise MYPROD…. datasets
  • Create PARMLIB entries
  • Update LNKLST and LPA
  • Update PROCLIB concatenation with product JCL
  • Create security profiles for any started tasks; which userid should be used?
  • WLM classification of the started task or job.
  • Schedule deletion of any log files older than a specified age
  • When multiple instances per LPAR, decide whether to use S MYSTASK1, S MYSTASK2, or S MYSTASK.T1, S MYSTASK.T2
  • Do you need to specify JESLOG SPIN to allow JES2 logs to be spun regularly, or when they are greater than a certain size, or any DD SYSOUT with SPIN?
  • ISPF
    • Add any ISPF Panels etc into logon procedures, or provide CLIST to do it.
    • Update your ISPF “extras” panel to add product to page.
  • Try to avoid SVCs. There are better ways, for example using authorized services.
  • Propagate the changes to all systems in the Sysplex.
  • What CF structures are needed. Do they have any specific characteristics, such as duplexed?
  • How much (e)CSA is needed, for each product instance.
  • Does your product need any Storage Class Memory (SCM).

Security team

  • Create groups as needed, e.g. MYPRODSYS, MYPRODRO, and make the requester's userid group special, so they can add and remove userids to and from the groups.
  • Create a userid for the started task. Create the userid with NOPASSWORD, to prevent people logging on with the userid and password.
  • Protect the MYPROD.* datasets, for example members of group MYPRODSYS can update the datasets, members of group MYPRODRO only have read-only access.
  • Create any other profiles.
  • Create any certificate or keyrings, and give users access to them.
  • Set up profiles for who can issue operator commands against the jobs or procedures.
  • Does the product require an "applid"? For example users must have access to a specific APPL to be able to use the facilities. An application can use pthread_security_applid_np to change the userid a thread is running on – but they must have access to an applid. The default applid is OMVSAPPL.
  • Do users needing to use this product need anything specific? Such as id(0), needing a Unix Segment, or access to any protected resources? See below for id(0).
  • If a client authenticates to the server, the server needs access to BPX.SERVER in the RACF FACILITY.
  • The started task userid may need access to BPX.DAEMON.
  • If a userid needs access to another user's keyring, the requester needs read access to user.ring.LST in CLASS(RDATALIB), or access to IRR.DIGTCERT.LISTRING.
  • If a userid needs access to a private key in another user's keyring, the requester needs control access to user.ring.LST in CLASS(RDATALIB). See the sketch after this list.
  • You might need to define data sets as program controlled, for example RDEF PROGRAM * ADDMEM('SYS1.LINKLIB'//NOPADCHK) UACC(READ).
  • Users may need access to ICSF class CSFSERV and CSFKEYS.
  • Use of CLASS(SURROGAT) BPX.SRV.<userid> to allow one userid to be a surrogate for another userid.
  • Use of CLASS(FACILITY) BPX.CONSOLE to remove the generation of BPXM023I messages on the syslog.
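
A minimal sketch of the keyring access profiles mentioned above (using the START1/MQRING names from the Java section, and the MYPRODRO and MYPRODSYS groups as examples):

RDEFINE RDATALIB START1.MQRING.LST UACC(NONE)
PERMIT START1.MQRING.LST CLASS(RDATALIB) ID(MYPRODRO) ACCESS(READ)
PERMIT START1.MQRING.LST CLASS(RDATALIB) ID(MYPRODSYS) ACCESS(CONTROL)
SETROPTS RACLIST(RDATALIB) REFRESH

Read access lets a group list the certificates in the keyring; control access is needed to use the private key of a certificate owned by another userid.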

Storage team

  • How much disk space is needed once the product has been installed, for data sets, and Unix file systems. This includes product libraries and instance data, and logs which can grow without limit.
  • How much temporary space is needed during the install.
  • Where do Unix files for the product go? for example /opt/ or /var….
  • Where do instance files go? For example on image local disks, or sysplex shared disks. You have an instance on every member of the Sysplex – where do you put the instance files?
  • How much data will be produced in normal running – for example traces or logs.
  • When can the data be pruned?
  • Does the product need its own ZFS for instance data, to keep it isolated so it cannot impact other products?
  • Do any additional Storage Classes etc need to be defined? These determine if and when datasets are migrated to tape, or get deleted.
  • Are any other definitions needed? For example datasets MYPROD.LOG* need to go on the fastest disks; MYPROD.SAMPLES* can go on any disks, and could be migrated.

Database team

  • What databases, tables, indexes etc are required?
  • How much disk space is needed.
  • What volume of updates per second. Can the existing DB2 instances sustain the additional throughput?
  • What security and protection is needed at the table level and at the field level.
  • What groups are permitted to access which fields?
  • What auditing is needed?
  • Is encryption needed?

MQ

  • Do you need to use MQ Shared Queues between queue managers?
  • How much data will be logged per second?
  • What is the space needed for the message storage, disk space, buffer pool and Coupling Facility?
  • Product specific definitions.
  • Security protection of any product specific definitions.

Networking

  • Which port(s) to use?
    • Do you need to control access to ports with the SAF resource on the PORT entry, and permit access to the profile EZB.PORTACCESS.sysname.tcpname.resname? (See the sketch after this list.)
    • Use of SHAREPORT and SHAREPORTWLM
  • Use of Sysplex Distributor to route work coming in to a Sysplex to any available system?
  • Update the port list – so only a specific job can use it
  • RACF profile for port?
  • Which cipher specs
  • Which level of TLS
  • Which certificates
  • Any AT-TLS profile?
  • Any firewall changes?
  • Any class of service?
  • Any changes to syslogd profile?
  • Are there any additional sites that will be accessed, and so need adding to the “allow” list.
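
A minimal sketch of a port reservation in the TCPIP profile, with the matching SAF profile (the port number, job name, resource name and group are made up; sysname and tcpname are your system and stack names):

PORT
  1414 TCP MYPROD1 SAF MYPROD    ; reserve port 1414 for job MYPROD1

RDEFINE SERVAUTH EZB.PORTACCESS.sysname.tcpname.MYPROD UACC(NONE)
PERMIT EZB.PORTACCESS.sysname.tcpname.MYPROD CLASS(SERVAUTH) ID(MYPRODSYS) ACCESS(READ)
SETROPTS RACLIST(SERVAUTH) REFRESH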

Automation

  • If the started tasks, or jobs need to be started at IPL, create the definitions. Do they have any pre-reqs, for example requiring DB2 to be active.
  • If the jobs are shutdown during the day, should they be automatically restarted?
  • Add automation to shut down any jobs or started tasks, when the system is shutdown
  • Which product messages need to be managed – such as events requiring operator action, or events reported to the site wide monitoring team.

Operations

  • Playbook for the product: how to start and stop it
  • Are there any other commands?

Monitoring

  • Any SMF data needed to be collected.
  • Any other monitoring.
  • How much additional CPU will be needed – at first, and in the long term.

Making your product secure

Many sites are ultra careful about keeping their system secure. The philosophy is give a user access for what they need to do – but no more. For example

  • They will not be comfortable installing a non-IBM SVC into their system. An SVC can be used from any address space, so if there is any weakness in the SVC it could be exploited.
  • Using id(0) (superuser) in Unix Services is not allowed. The userid needs to be given specific permission. If the code “changes userid” then services like pthread_security_applid_np() should be used; where the applid is part of the configuration. Alternatives include __login_applid. End users of this facility will need read access to the specific applid.

TLS and SSL

If you are using TLS there are other considerations

  • Any certificate you generate needs a long validity period, and JCL to recreate it when it expires.
  • If you create a Certificate Authority you need to document how to export it and distribute it to other platforms
  • Browsers and application may verify the host name, so you need to generate a certificate with a valid name. The external z/OS name may be different from the internal name.
  • You should support TLS V1.2 and TLS V1.3. Other TLS and SSL versions are deprecated.
  • It is good practice to have one keyring with the server certificate and its private key, and a "common" trust store keyring which has the Certificate Authorities for all the sites connecting to the z/OS image. If you connect to a new site, you update the common keyring, and all applications pick up the new CA. If instead every application has its own keyring, you have to update each of those keyrings whenever a new CA certificate is added. (See the sketch below.)
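
A minimal sketch of adding a new CA certificate to a shared trust keyring (the ring name TRUST, the owner START1 and the label are made up):

RACDCERT ID(START1) CONNECT(CERTAUTH LABEL('New partner CA') RING(TRUST))

Every application whose configuration points at safkeyring://START1/TRUST picks up the new CA the next time it opens the keyring.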

Using enclaves in a C program.

On z/OS enclaves allow you to set the priority of business transactions within your program, and to record the CPU used by the threads involved in the transaction – even if they are in a different address space.

Many of the WLM functions are assembler macros which require supervisor state. There are C run time functions which do some of the WLM functions. There are also Java methods which invoke the C run time functions.

Not all of the WLM functions are available in the C run time environment. You can set up enclaves, but the application cannot query information about the enclave, such as total CPU used, or the WLM parameters.

Minimal C program

The minimal program is below, and the key WLM functions are explained afterwards.

#include <sys/__wlm.h>

int main(void)
{
  wlmetok_t          enclavetoken;
  server_classify_t  classify;
  long rc;

  //
  // Connect to the work manager: subsystem type JES, subsystem name SM3
  //
  unsigned int connectToken = ConnectWorkMgr("JES","SM3");

  // Create the classification area
  classify = __server_classify_create();

  // Pass the connection token to the classify area.
  // This is needed but not documented.
  rc = __server_classify(classify,
                         _SERVER_CLASSIFY_CONNTKN,
                         (char *) &connectToken);

  // Classify with the transaction name TCI2
  rc = __server_classify(classify,
                         _SERVER_CLASSIFY_TRANSACTION_NAME,
                         "TCI2");

  for (int loop = 0; loop < 1000; loop++)
  {
    // Create the independent enclave (business transaction) and join it
    rc = CreateWorkUnit(&enclavetoken,
                        classify,
                        NULL,        // arrival time: NULL seemed to work
                        "COLINS");   // function name
    rc = JoinWorkUnit(&enclavetoken);

    // do some work to burn some CPU
    for (int i = 0; i < 100000; i++)
    {
      double xx = i / 0.5;
      double yy = xx * xx;
    }

    // Leave the enclave, then delete it: the transaction has ended
    rc = LeaveWorkUnit(&enclavetoken);
    rc = DeleteWorkUnit(&enclavetoken);
  }

  // Disconnect from WLM
  rc = DisconnectServer(&connectToken);
  return 0;
}
 

What are the key functions?

unsigned int connectToken = ConnectWorkMgr("JES","SM3");

This creates a connection to WLM, using the subsystem type JES and the subsystem name SM3. Note: on my system it is JES, not JES2. The WLM dialog, option 6 (Classification Rules), lists the subsystems available. You can browse a subsystem type and see the available definitions. I had

         -------Qualifier--------                 -------Class--------  
Action   Type Name     Start        Service     Report   
 ____  1 SI   SM3      ___          TCI1SC      THRU
 ____  2   TN  TCI3    ___          TCI1SC      TCI3
 ____  2   TN  TCI2    ___          TCI1SC      TCI2

server_classify_t  classify = __server_classify_create( );

CreateWorkUnit, the function used  to create an independent enclave (business transaction), needs to be able to classify the transaction to determine what service class (priority) to give the enclave.  This request sets up the classify control block.

rc = __server_classify(classify, _SERVER_CLASSIFY_CONNTKN, (char *)&connectToken );

The documentation does not tell you to pass the connection token.  If you omit this step the CreateWorkUnit fails with error code errno2=0x0330083B.

The __server_classify expects a char * as the value, so you have to use (char *) &connectToken.

rc = __server_classify(classify, _SERVER_CLASSIFY_TRANSACTION_NAME, "TCI2" );

This tells WLM about the transaction we want to use. TRANSACTION_NAME matches up with TN above in the WLM definitions. This says the business transaction is called TCI2. There are other criteria, such as userid, plan or LU; see the documentation for the list. The list is incomplete, as it does not support classifiers like client IP address, which is available with the assembler macros.

rc = CreateWorkUnit(&enclavetoken, classify, NULL, "COLINS" );

This uses the classification parameter defined above, to create the independent enclave, and return the enclave token. 

The documentation for CreateWorkUnit says you need to pass the arrival time: the address of a doubleword (unsigned long long) field that contains the arrival time of the work request in STCK format. I created a small assembler function which just returned a STCK value to my C program. However I passed NULL and it seemed to produce the correct values – I think CreateWorkUnit does a STCK for you.

You pass in the name of the function ("COLIN"). The only place I have seen this used is if you use QueryWorkUnitClassification() to extract the classify information. For example QueryWorkUnitClassification gave a control block with non-empty fields _ecdtrxn[8]=TCI2, _ecdsubt[4]=JES, _ecdfcn[8]=COLIN, _ecdsubn[8]=SM3. This function does not return the report class or service class.

rc = JoinWorkUnit(& enclavetoken);

This causes any work this TCB does to be recorded against the enclave.

rc =LeaveWorkUnit(&enclavetoken);

This stops work being recorded against the enclave. Subsequent work gets charged to the home address space.

rc= DeleteWorkUnit(& enclavetoken);

The business transaction has finished.  Information about the response time and CPU used are stored.

rc =  DisconnectServer(&connectToken);

This disconnects from WLM.

Using a subtask

I had a different thread in the program which did some work for the transaction. Using the enclave token, this work can be recorded against the transaction using

// The enclave token is passed with the request
rc = JoinWorkUnit(&enclaveToken); 
do some work...
rc = LeaveWorkUnit(&enclaveToken);  

This would be useful if you are using a connection pool to connect to an MQ or DB2 subsystem. You have a pool of threads which have done the expensive connect with a particular userid, and a thread is used to execute the MQ or DB2 requests as that userid.
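
A minimal sketch of the pattern, assuming the enclave token created by CreateWorkUnit is passed to the worker thread (in a real connection pool the token would travel with each work request):

#include <pthread.h>
#include <sys/__wlm.h>

// Worker thread: CPU used between Join and Leave is recorded against the enclave
static void *worker(void *arg)
{
  wlmetok_t *enclaveToken = (wlmetok_t *) arg;
  long rc;
  rc = JoinWorkUnit(enclaveToken);
  // ... do the MQ or DB2 work for the transaction ...
  rc = LeaveWorkUnit(enclaveToken);
  return NULL;
}

// In the thread which created the enclave:
//   pthread_t tid;
//   pthread_create(&tid, NULL, worker, &enclavetoken);
//   pthread_join(tid, NULL);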

Other functions available

Dependent (address space) enclave

The above discussion was for a business transaction, known in the publications as an Independent enclave. An address space can have a Dependent enclave where the CPU is recorded as "Dependent Enclave" within the address space. You use the function ContinueWorkUnit(&enclave) to return the enclave token. You then use JoinWorkUnit and LeaveWorkUnit as before. I cannot see why you might want to use this.
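
A minimal sketch, using the same functions as above:

wlmetok_t depEnclave;
rc = ContinueWorkUnit(&depEnclave);   // get the address space (dependent) enclave token
rc = JoinWorkUnit(&depEnclave);       // CPU from here on is recorded as Dependent Enclave CPU
// ... do some work ...
rc = LeaveWorkUnit(&depEnclave);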

Display the classification

You can use the QueryWorkUnitClassification to return a structure for the classification.

Reset the classification.

If you want to classify a different transaction, you can use server_classify_init() to reset the structure.

Set up a server

You can set up a server where your application puts work onto WLM queues, and other threads can get work.   This is an advanced topic which I have not looked into.

Make your enclave visible across the sysplex

You can use ExportWorkUnit  and ImportWorkUnit to have your enclave be visible in the sysplex.

Query what systems in the sysplex are running in goal mode.

You can use QueryMetrics() to obtain the systems in the sysplex that are in goal mode. This includes  available CPU capacity and resource constraint status. 

What is not available to the C interface

One reason why I was investigating enclaves was to understand the enclave data in the SMF 30 records. There is an assembler macro IWMEQTME which returns the CPU, zIIP and zAAP times used by the independent enclaves. Unfortunately this requires supervisor state. I wrote some assembler code to extract this and display the data. Another complication is that the WLM macros are AMODE 31 – so it did not work with my 64 bit C program.

Enclaves in practice. How to capture all the CPU your application uses, and knowing where to look for it.

z/OS has enclaves to manage work.

  1. When an enclave is used, a transaction can issue a DB2 request, then DB2 uses some TCBs in the DB2 address spaces on behalf of the original request. The CPU used by these DB2 TCBs can be charged back to the original application.
  2. When an enclave is not used, the CPU used by the original TCB is charged to its address space, and the DB2 TCBs are charged to the DB2 address space.  You do not get a complete picture of the CPU used by the application.
  3. A transaction can be defined to Work Load Manager (WLM) to set the priority of the transaction, so online transactions have high priority, and background work gets low priority. With an enclave, the DB2 TCBs have the same priority as the original request. With no enclave the TCBs have the priority determined by DB2.

When an application sets up an enclave

  1. Threads can join the enclave, so any CPU the thread uses while in the enclave, is recorded against the enclave.
  2. These threads can be in the same address space, a different address space on the same LPAR, or even in a different LPAR in the SYSPLEX.
  3. Enclaves are closely integrated with Work Load Manager(WLM).   When you create an enclave you can give information about the business transaction, (such as transaction name and  userid).   You classify the application against different factors. 
  4. The classification maps to a service class.   This service class determines the appropriate priority profile.  Any threads using the enclave will get this priority.
  5. WLM reports on the elapsed time of the business transaction, and the CPU used.

What enclave types are there?

In this simple explanation there are two enclave types, plus work which is not in an enclave

  1. Independent enclave – what I think of as Business Transaction, where work can span multiple address spaces.  You pass transaction information (transaction, userid, etc) to WLM so it can set the priority for the enclave. You can get reports on the enclave showing elapsed time, and CPU used.  There can be many independent enclaves in the lifetime of a job.  You can have these enclaves running in parallel within a job.
  2. Dependent enclave or Address space enclave.   I cannot see the reason for this.  This is for tasks running within an address space which are not doing work for an independent enclave.  It could be used for work related to transactions in general.   In the SMF 30 job information records you get information on CPU used in the dependent enclave.  
  3. Work not in an enclave.  Threads by default run with the priority assigned to the address space.  CPU is charged to the address space.

To help me understand enclave reports, I set up two jobs

  1. The parent job,
    1. Creates an independent (transactional) enclave with “subsystem=JES, definition=SM3” and “TRANSACTION NAME=TCI2”.  It displays the enclave token.
    2. Sets up a dependent enclave.
    3. Joins the dependent enclave.
    4. Does some CPU intensive work.
    5. Sleeps for 30 seconds.
    6. Leaves the dependent enclave.
    7. Deletes the dependent enclave.
    8. Deletes the independent enclave.
    9. Ends.
  2. The child, or subtask, job.
    1. This reads the enclave token as a parameter.
    2. Joins the enclave; if the enclave does not exist, it uses the dependent enclave.
    3. Does some CPU intensive work.
    4. Leaves the enclave.
    5. Ends.

Where is information reported?

  1. Information about a job and the resources used by the job is in SMF 30 records. It reports total CPU used, CPU used by independent enclaves, and CPU used by the dependent enclave. In JCL output where it reports the CPU etc used by the job step, this comes from the SMF 30 record.
  2. Information about the independent enclave is summarised in an SMF 72 record over a period(typically 15 minutes) and tells you information about the response time distribution, and the CPU used.

I used three scenarios

  1. Run the parent job – but not the child. This shows the costs of just the parent, when there is no child running the workload for it.
  2. Run the child but not the parent.   This shows the cost of the expensive workload.
  3. Both parent and child active. This shows that the costs of running the independent enclave in the child are charged to the parent.

SMF job resource used report

From the SMF 30 record we get the CPU for the Independent enclave.

Parent, no child
  Parent CPU: Total 0.070, Dependent enclave 0.020
  Child CPU : Not applicable

Child, no parent
  Parent CPU: Not applicable
  Child CPU : Total 2.930, Dependent enclave 2.900, Non enclave 0.030

Parent and child
  Parent CPU: Total 2.860, Independent enclave 2.820, Dependent enclave 0.010, Non enclave 0.030
  Child CPU : Total 0.020, no enclave CPU reported

From the parent and child we can see that the CPU used by the enclave work in the child job has been recorded against the parent’s job under “Independent enclave CPU”.

The SMF type 30 record shows the Parent job had CPU under Independent enclave, Dependent enclave, and a small amount (0.03) which was not enclave.

SMF WLM reports

From the SMF 72 data displayed by RMF (see below for an example) you get the number of transactions and CPU usage for the report class, and service class. I had a report class for each of the parent, and the child job, and for the Independent enclave transaction.

                       Total CPU   Elapsed time   Ended
Parent                   2.818        34.97          1
Child                    2.636         6.76          1
Business transaction     2.819        30.04          1

It is clear there is some double counting. The CPU used by the child doing enclave processing is also recorded in the Parent's cost. The CPU used for the Business transaction is another view of the data from the parent and child address spaces.

For charging based on address spaces you should use the SMF 30 records.

You can use the SMF 72 records for reporting on the transaction costs.

RMF workload reports

When processing the SMF data using RMF you get out workload reports

//POST EXEC PGM=ERBRMFPP 
//MFPINPUT DD DISP=SHR,DSN=smfinput.dataset  
//SYSIN DD * 
SYSRPTS(WLMGL(SCPER,RCLASS,RCPER,SCLASS)) 
/* 

For the child address space report class RMF reported

-TRANSACTIONS--  TRANS-TIME HHH.MM.SS.FFFFFF
 AVG        0.05  ACTUAL             6.760380
 MPL        0.05  EXECUTION          6.254239
 ENDED         1  QUEUED               506141
 ...
 ----SERVICE----   SERVICE TIME  ---APPL %---
 IOC        2152   CPU    2.348  CP      2.20
 CPU        2012   SRB    0.085  IIPCP   0.00
 MSO         502   RCT    0.004  IIP     0.00
 SRB          73   IIT    0.197  AAPCP   0.00
 TOT        4739   HST    0.002  AAP      N/A

There was 1 occurrence of the child job, it ran for 6.76 seconds on average, and used a total of 2.636 seconds of CPU (if you add up the service time).

For a more typical job using many short duration independent enclaves the report looked like

-TRANSACTIONS--  TRANS-TIME HHH.MM.SS.FFFFFF 
 AVG        0.11  ACTUAL                13395 
 MPL        0.11  EXECUTION             13395 
 ENDED      1000  QUEUED                    0 
 END/S      8.33  R/S AFFIN                 0 
 SWAPS         0  INELIGIBLE                0
 EXCTD         0  CONVERSION                0 
                  STD DEV                1325 
 ----SERVICE----   SERVICE TIME  
 IOC           0   CPU    1.448   
 CPU        1241   SRB    0.000   
 MSO           0   RCT    0.000  
 SRB           0   IIT    0.000  
 TOT        1241   HST    0.000  

This shows 1000 transaction ended in the period and the average transaction response time was 13.395 milliseconds. The total CPU time used was 1.448 seconds, or an average of 1.448 milliseconds of CPU per transaction.

For the service class with a response time definition, you get a response time profile. The data below shows that most response times were between 10 and 15 ms. The service class was defined with "Average response time of 00:00:00.010". This drives the range of response times reported. If this data was for a production system you may want to adjust the "Average response time" to 00:00:00.015 to get the peak in the middle of the range.

-----TIME--- -% TRANSACTIONS-    0  10   20   30   40   50 
    HH.MM.SS.FFF CUM TOTAL BUCKET|...|....|....|....|....|
 <= 00.00.00.005       0.0  0.0   >                           
 <= 00.00.00.006       0.0  0.0  >                           
 <= 00.00.00.007       0.0  0.0  >                           
 <= 00.00.00.008       0.0  0.0  >                           
 <= 00.00.00.009       0.0  0.0  >                           
 <= 00.00.00.010       0.0  0.0  >                           
 <= 00.00.00.011       0.3  0.3  >                           
 <= 00.00.00.012      10.6 10.3  >>>>>>                      
 <= 00.00.00.013      24.9 14.3  >>>>>>>>                    
 <= 00.00.00.014      52.3 27.4  >>>>>>>>>>>>>>              
 <= 00.00.00.015      96.7 44.4  >>>>>>>>>>>>>>>>>>>>>>>     
 <= 00.00.00.020     100.0  3.2  >>                          
 <= 00.00.00.040     100.0  0.1  >                           
    00.00.00.040     100.0  0.0  >                                                                                

Take care.

The field ENDED is the number of transactions that ended in the interval. If you have a measurement that spans an interval, you will get CPU usage in both intervals, but “ENDED” only when the transaction ends. With a large number of transactions this effect is small. With few transactions it could cause divide by zero exceptions!

No, No, think before you create a naming convention

I remember doing a review of a large customer which had grown by mergers and acquisitions. We were discussing naming conventions, and whether they had them.

“Naming conventions”, he said, “we love them. We have hundreds of them around the place”. He said it was too hard and disruptive to try to get it down to a small number of naming conventions.

I saw someone’s MQ configuration and wished they had thought through their naming convention, or asked someone with more experience.  This is what I saw

  • The MQ libraries were called CSQ910.SCSQAUTH
    • This is OK as it tells you what level of MQ you are using
    • It would be good to have a dataset alias of CSQ pointing to CSQ910 (see the example after this list). Without this you have to change the JCL for all jobs, compiles, runs etc which had CSQ910. When you moved from CSQ810 to CSQ900 you had to change the JCL. If you then decide to go back to CSQ810 for a week, you have to change the JCL again. With the alias it is easy – change the alias and the JCL does not need to change. Change the alias again – and the JCL does not need to change.
  • The MQ logs were called CSQ710.QM.LOGCOPY1.DS01, … DS02,…DS03
    • This shows the classic problem of having the queue manager release as part of the object names.  It would have been better to have names like CSQ.QM.LOGCOPY1.DS01 without the MQ version in it.
    • The name does include a queue manager name of sorts, but a queue manager name of QM is not very good.  If you need another queue manager you will have names like QM, QMA, QMB so an inconsistent name.
    • It is good to have the queue manager name as part of the data set name, so if the queue manager was QM01 then have CSQ.QM01.
  • The page sets were CSQ710.QM.PAGESET0, CSQ710.QM.PAGESET1,  CSQ710.QM.PAGESET2,  CSQ710.QM.PAGESET3,  CSQ810.QM.PAGESET4, CSQ910.QM.PAGESET5
    • This shows the naming standard problem as it evolved over time.  They added more page sets, and used the MQ release as the High Level Qualifier.  The page sets are CSQ710,… CSQ810…,  CSQ910… – following the naming standard.
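
A minimal sketch of defining such an alias with IDCAMS (data set names as in the example above; check this fits your catalog setup before using it):

//DEFALIAS EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  DEFINE ALIAS (NAME(CSQ.SCSQAUTH) RELATE(CSQ910.SCSQAUTH))
/*

When you move to a new release, delete the alias and define it again pointing at the new library; the JCL that refers to CSQ.SCSQAUTH does not change.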

You do not invent a naming convention in isolation, you need to put an architect’s hat on and see the bigger picture, where you have production and test queue managers, different versions of MQ, and see MQ is just a small part of the z/OS infrastructure.

  • People often have one queue manager per LPAR, and name the queue manager after the LPAR.
  • You are likely to have multiple machines – for example to provide availability, so plan for multiple queue managers.
  • You may want different HLQs to be able to identify production queue manager data sets and test queue manager data sets.
  • The security team will need to set up profiles for queue managers. Having MQPROD and MQTEST as a HLQ may make it easier to set up.
  • The storage team (what I used to call data managers)  set up SMS with rules for data set placement. For example production pagesets with a name like MQPROD.**.PSID* go on the newest, fastest, mirrored disks.  MQTEST.** go on older disks.
  • As part of the SMS definitions, the storage team define how often, and when, to back up data sets. A production page set may be backed up using flash copy once an hour. (This is within the storage subsystem and takes seconds; it works by copying the pointers to the records on disk.) Non production data sets get backed up overnight.


Lessons learned

  • For the IBM provided libraries, include the VRM in the data set names.
  • Define an alias pointing to the current libraries so applications do not need to change JCL.   You could have a Unix Services alias for the files in the zFS.
  • Do not put the MQ release in the queue manager data sets names.
  • Use queue manager names that are relevant and scale.
  • Talk to your security and storage managers about the naming conventions; what you want protected, and how you want your queue manager data sets to be managed.